00:00:00.001 Started by upstream project "autotest-per-patch" build number 131992 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.065 The recommended git tool is: git 00:00:00.066 using credential 00000000-0000-0000-0000-000000000002 00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.092 Fetching changes from the remote Git repository 00:00:00.094 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.131 Using shallow fetch with depth 1 00:00:00.131 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.131 > git --version # timeout=10 00:00:00.181 > git --version # 'git version 2.39.2' 00:00:00.181 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.218 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.218 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.952 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.964 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.974 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:03.974 > git config core.sparsecheckout # timeout=10 00:00:03.986 > git read-tree -mu HEAD # timeout=10 00:00:04.003 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:04.028 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:04.028 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:04.120 [Pipeline] Start of Pipeline 00:00:04.130 [Pipeline] library 00:00:04.131 Loading library shm_lib@master 00:00:04.131 Library shm_lib@master is cached. Copying from home. 00:00:04.145 [Pipeline] node 00:00:04.153 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:04.155 [Pipeline] { 00:00:04.163 [Pipeline] catchError 00:00:04.164 [Pipeline] { 00:00:04.176 [Pipeline] wrap 00:00:04.185 [Pipeline] { 00:00:04.193 [Pipeline] stage 00:00:04.195 [Pipeline] { (Prologue) 00:00:04.209 [Pipeline] echo 00:00:04.210 Node: VM-host-SM17 00:00:04.214 [Pipeline] cleanWs 00:00:04.222 [WS-CLEANUP] Deleting project workspace... 00:00:04.222 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.228 [WS-CLEANUP] done 00:00:04.419 [Pipeline] setCustomBuildProperty 00:00:04.491 [Pipeline] httpRequest 00:00:05.090 [Pipeline] echo 00:00:05.092 Sorcerer 10.211.164.20 is alive 00:00:05.100 [Pipeline] retry 00:00:05.102 [Pipeline] { 00:00:05.113 [Pipeline] httpRequest 00:00:05.117 HttpMethod: GET 00:00:05.118 URL: http://10.211.164.20/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:05.118 Sending request to url: http://10.211.164.20/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:05.120 Response Code: HTTP/1.1 200 OK 00:00:05.120 Success: Status code 200 is in the accepted range: 200,404 00:00:05.121 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:05.654 [Pipeline] } 00:00:05.667 [Pipeline] // retry 00:00:05.675 [Pipeline] sh 00:00:05.954 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:05.966 [Pipeline] httpRequest 00:00:06.293 [Pipeline] echo 00:00:06.294 Sorcerer 10.211.164.20 is alive 00:00:06.300 [Pipeline] retry 00:00:06.301 [Pipeline] { 00:00:06.311 [Pipeline] httpRequest 00:00:06.315 HttpMethod: GET 00:00:06.315 URL: http://10.211.164.20/packages/spdk_fcc19e276d2ed75b75e3022bd9033b442bef5cc5.tar.gz 00:00:06.316 Sending request to url: http://10.211.164.20/packages/spdk_fcc19e276d2ed75b75e3022bd9033b442bef5cc5.tar.gz 00:00:06.324 Response Code: HTTP/1.1 200 OK 00:00:06.324 Success: Status code 200 is in the accepted range: 200,404 00:00:06.325 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk_fcc19e276d2ed75b75e3022bd9033b442bef5cc5.tar.gz 00:00:44.965 [Pipeline] } 00:00:44.982 [Pipeline] // retry 00:00:44.990 [Pipeline] sh 00:00:45.269 + tar --no-same-owner -xf spdk_fcc19e276d2ed75b75e3022bd9033b442bef5cc5.tar.gz 00:00:48.569 [Pipeline] sh 00:00:48.885 + git -C spdk log --oneline -n5 00:00:48.886 fcc19e276 nvme/perf: interrupt mode support for pcie controller 00:00:48.886 b8c65ccf8 bdev/nvme: interrupt mode for PCIe transport 00:00:48.886 f7ed8cd63 lib/nvme: eventfd to handle disconnected I/O qpair 00:00:48.886 6de686443 nvme/poll_group: create and manage fd_group for nvme poll group 00:00:48.886 1efa1b16d nvme: interface to check disconnected queue pairs 00:00:48.905 [Pipeline] writeFile 00:00:48.921 [Pipeline] sh 00:00:49.203 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:49.214 [Pipeline] sh 00:00:49.493 + cat autorun-spdk.conf 00:00:49.493 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.493 SPDK_TEST_NVMF=1 00:00:49.493 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.493 SPDK_TEST_URING=1 00:00:49.493 SPDK_TEST_USDT=1 00:00:49.493 SPDK_RUN_UBSAN=1 00:00:49.493 NET_TYPE=virt 00:00:49.493 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:49.500 RUN_NIGHTLY=0 00:00:49.502 [Pipeline] } 00:00:49.515 [Pipeline] // stage 00:00:49.530 [Pipeline] stage 00:00:49.533 [Pipeline] { (Run VM) 00:00:49.546 [Pipeline] sh 00:00:49.825 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:49.825 + echo 'Start stage prepare_nvme.sh' 00:00:49.825 Start stage prepare_nvme.sh 00:00:49.825 + [[ -n 5 ]] 00:00:49.825 + disk_prefix=ex5 00:00:49.825 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 ]] 00:00:49.825 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf ]] 00:00:49.825 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf 00:00:49.825 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.825 ++ SPDK_TEST_NVMF=1 00:00:49.825 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.825 ++ SPDK_TEST_URING=1 00:00:49.825 ++ SPDK_TEST_USDT=1 00:00:49.825 ++ SPDK_RUN_UBSAN=1 00:00:49.825 ++ NET_TYPE=virt 00:00:49.825 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:49.825 ++ RUN_NIGHTLY=0 00:00:49.825 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4 00:00:49.825 + nvme_files=() 00:00:49.825 + declare -A nvme_files 00:00:49.825 + backend_dir=/var/lib/libvirt/images/backends 00:00:49.825 + nvme_files['nvme.img']=5G 00:00:49.825 + nvme_files['nvme-cmb.img']=5G 00:00:49.825 + nvme_files['nvme-multi0.img']=4G 00:00:49.825 + nvme_files['nvme-multi1.img']=4G 00:00:49.825 + nvme_files['nvme-multi2.img']=4G 00:00:49.825 + nvme_files['nvme-openstack.img']=8G 00:00:49.825 + nvme_files['nvme-zns.img']=5G 00:00:49.825 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:49.825 + (( SPDK_TEST_FTL == 1 )) 00:00:49.825 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:49.825 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:49.825 + for nvme in "${!nvme_files[@]}" 00:00:49.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:49.825 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.825 + for nvme in "${!nvme_files[@]}" 00:00:49.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:49.825 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.825 + for nvme in "${!nvme_files[@]}" 00:00:49.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:49.825 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:49.825 + for nvme in "${!nvme_files[@]}" 00:00:49.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:49.825 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:49.825 + for nvme in "${!nvme_files[@]}" 00:00:49.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:49.825 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.825 + for nvme in "${!nvme_files[@]}" 00:00:49.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:49.825 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:49.825 + for nvme in "${!nvme_files[@]}" 00:00:49.825 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:50.083 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:50.083 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:50.342 + echo 'End stage prepare_nvme.sh' 00:00:50.342 End stage prepare_nvme.sh 00:00:50.352 [Pipeline] sh 00:00:50.633 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:50.633 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:00:50.633
00:00:50.633 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/scripts/vagrant
00:00:50.633 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk
00:00:50.633 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4
00:00:50.633 HELP=0
00:00:50.633 DRY_RUN=0
00:00:50.633 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:00:50.633 NVME_DISKS_TYPE=nvme,nvme,
00:00:50.633 NVME_AUTO_CREATE=0
00:00:50.633 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:00:50.633 NVME_CMB=,,
00:00:50.633 NVME_PMR=,,
00:00:50.633 NVME_ZNS=,,
00:00:50.633 NVME_MS=,,
00:00:50.633 NVME_FDP=,,
00:00:50.633 SPDK_VAGRANT_DISTRO=fedora39
00:00:50.633 SPDK_VAGRANT_VMCPU=10
00:00:50.633 SPDK_VAGRANT_VMRAM=12288
00:00:50.633 SPDK_VAGRANT_PROVIDER=libvirt
00:00:50.633 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:50.633 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:50.633 SPDK_OPENSTACK_NETWORK=0
00:00:50.633 VAGRANT_PACKAGE_BOX=0
00:00:50.633 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/scripts/vagrant/Vagrantfile
00:00:50.633 FORCE_DISTRO=true
00:00:50.633 VAGRANT_BOX_VERSION=
00:00:50.633 EXTRA_VAGRANTFILES=
00:00:50.633 NIC_MODEL=e1000
00:00:50.633
00:00:50.633 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt'
00:00:50.633 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4
00:00:53.919 Bringing machine 'default' up with 'libvirt' provider...
00:00:54.485 ==> default: Creating image (snapshot of base box volume).
00:00:54.744 ==> default: Creating domain with the following settings...
00:00:54.744 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730713826_973b7577e6c5267a8d60
00:00:54.744 ==> default: -- Domain type: kvm
00:00:54.744 ==> default: -- Cpus: 10
00:00:54.744 ==> default: -- Feature: acpi
00:00:54.744 ==> default: -- Feature: apic
00:00:54.744 ==> default: -- Feature: pae
00:00:54.744 ==> default: -- Memory: 12288M
00:00:54.744 ==> default: -- Memory Backing: hugepages:
00:00:54.744 ==> default: -- Management MAC:
00:00:54.744 ==> default: -- Loader:
00:00:54.744 ==> default: -- Nvram:
00:00:54.744 ==> default: -- Base box: spdk/fedora39
00:00:54.744 ==> default: -- Storage pool: default
00:00:54.744 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730713826_973b7577e6c5267a8d60.img (20G)
00:00:54.744 ==> default: -- Volume Cache: default
00:00:54.744 ==> default: -- Kernel:
00:00:54.744 ==> default: -- Initrd:
00:00:54.744 ==> default: -- Graphics Type: vnc
00:00:54.744 ==> default: -- Graphics Port: -1
00:00:54.744 ==> default: -- Graphics IP: 127.0.0.1
00:00:54.744 ==> default: -- Graphics Password: Not defined
00:00:54.744 ==> default: -- Video Type: cirrus
00:00:54.744 ==> default: -- Video VRAM: 9216
00:00:54.744 ==> default: -- Sound Type:
00:00:54.744 ==> default: -- Keymap: en-us
00:00:54.744 ==> default: -- TPM Path:
00:00:54.744 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:54.744 ==> default: -- Command line args:
00:00:54.744 ==> default: -> value=-device,
00:00:54.744 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:54.744 ==> default: -> value=-drive,
00:00:54.744 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:00:54.744 ==> default: -> value=-device,
00:00:54.744 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:54.744 ==> default: -> value=-device,
00:00:54.744 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:54.744 ==> default: -> value=-drive,
00:00:54.744 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:54.744 ==> default: -> value=-device,
00:00:54.744 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:54.744 ==> default: -> value=-drive,
00:00:54.744 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:54.744 ==> default: -> value=-device,
00:00:54.744 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:54.744 ==> default: -> value=-drive,
00:00:54.744 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:54.744 ==> default: -> value=-device,
00:00:54.744 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:54.744 ==> default: Creating shared folders metadata...
00:00:54.744 ==> default: Starting domain.
00:00:56.119 ==> default: Waiting for domain to get an IP address...
00:01:14.223 ==> default: Waiting for SSH to become available...
00:01:14.223 ==> default: Configuring and enabling network interfaces...
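Note: the "-device"/"-drive" fragments listed under "Command line args" above are extra arguments that the vagrant-libvirt provider passes through to QEMU; they attach the raw backing files created in prepare_nvme.sh as emulated NVMe controllers and namespaces. Assembled by hand they would correspond roughly to the sketch below (illustrative only; the emulator path is the SPDK_QEMU_EMULATOR value above, and the machine, memory, network, and boot-disk options that libvirt generates from the domain definition are omitted):

  # Sketch only: libvirt-generated machine/memory/boot/network options left out.
  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
      # ex5-nvme-multi1.img (nsid=2) and ex5-nvme-multi2.img (nsid=3) attach to nvme-1 the same way.

In other words, controller nvme-0 (serial 12340) gets a single namespace backed by ex5-nvme.img, while nvme-1 (serial 12341) gets three namespaces backed by the multi0/1/2 images; inside the guest these later show up as nvme0n1 and nvme1n1-n3 in the setup.sh status output.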
00:01:16.774 default: SSH address: 192.168.121.31:22
00:01:16.774 default: SSH username: vagrant
00:01:16.774 default: SSH auth method: private key
00:01:19.305 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:27.421 ==> default: Mounting SSHFS shared folder...
00:01:28.798 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:28.798 ==> default: Checking Mount..
00:01:30.173 ==> default: Folder Successfully Mounted!
00:01:30.173 ==> default: Running provisioner: file...
00:01:30.739 default: ~/.gitconfig => .gitconfig
00:01:31.306
00:01:31.306 SUCCESS!
00:01:31.306
00:01:31.306 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt and type "vagrant ssh" to use.
00:01:31.306 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:31.306 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt" to destroy all trace of vm.
00:01:31.306
00:01:31.314 [Pipeline] }
00:01:31.328 [Pipeline] // stage
00:01:31.337 [Pipeline] dir
00:01:31.338 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/fedora39-libvirt
00:01:31.340 [Pipeline] {
00:01:31.353 [Pipeline] catchError
00:01:31.355 [Pipeline] {
00:01:31.367 [Pipeline] sh
00:01:31.646 + vagrant ssh-config --host vagrant
00:01:31.646 + sed -ne /^Host/,$p
00:01:31.646 + tee ssh_conf
00:01:35.846 Host vagrant
00:01:35.846 HostName 192.168.121.31
00:01:35.846 User vagrant
00:01:35.846 Port 22
00:01:35.846 UserKnownHostsFile /dev/null
00:01:35.846 StrictHostKeyChecking no
00:01:35.846 PasswordAuthentication no
00:01:35.846 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:35.846 IdentitiesOnly yes
00:01:35.846 LogLevel FATAL
00:01:35.846 ForwardAgent yes
00:01:35.846 ForwardX11 yes
00:01:35.846
00:01:35.860 [Pipeline] withEnv
00:01:35.862 [Pipeline] {
00:01:35.877 [Pipeline] sh
00:01:36.158 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:36.158 source /etc/os-release
00:01:36.158 [[ -e /image.version ]] && img=$(< /image.version)
00:01:36.158 # Minimal, systemd-like check.
00:01:36.158 if [[ -e /.dockerenv ]]; then
00:01:36.158 # Clear garbage from the node's name:
00:01:36.158 # agt-er_autotest_547-896 -> autotest_547-896
00:01:36.158 # $HOSTNAME is the actual container id
00:01:36.158 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:36.158 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:36.158 # We can assume this is a mount from a host where container is running,
00:01:36.158 # so fetch its hostname to easily identify the target swarm worker.
00:01:36.158 container="$(< /etc/hostname) ($agent)"
00:01:36.158 else
00:01:36.158 # Fallback
00:01:36.158 container=$agent
00:01:36.158 fi
00:01:36.158 fi
00:01:36.158 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:36.158
00:01:36.430 [Pipeline] }
00:01:36.446 [Pipeline] // withEnv
00:01:36.455 [Pipeline] setCustomBuildProperty
00:01:36.471 [Pipeline] stage
00:01:36.474 [Pipeline] { (Tests)
00:01:36.491 [Pipeline] sh
00:01:36.772 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:37.045 [Pipeline] sh
00:01:37.340 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:37.355 [Pipeline] timeout
00:01:37.355 Timeout set to expire in 1 hr 0 min
00:01:37.357 [Pipeline] {
00:01:37.372 [Pipeline] sh
00:01:37.655 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:38.224 HEAD is now at fcc19e276 nvme/perf: interrupt mode support for pcie controller
00:01:38.237 [Pipeline] sh
00:01:38.517 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:38.789 [Pipeline] sh
00:01:39.068 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:39.085 [Pipeline] sh
00:01:39.367 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo
00:01:39.625 ++ readlink -f spdk_repo
00:01:39.625 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:39.625 + [[ -n /home/vagrant/spdk_repo ]]
00:01:39.625 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:39.625 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:39.625 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:39.625 + [[ !
-d /home/vagrant/spdk_repo/output ]] 00:01:39.625 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:39.625 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:39.625 + cd /home/vagrant/spdk_repo 00:01:39.625 + source /etc/os-release 00:01:39.625 ++ NAME='Fedora Linux' 00:01:39.625 ++ VERSION='39 (Cloud Edition)' 00:01:39.625 ++ ID=fedora 00:01:39.625 ++ VERSION_ID=39 00:01:39.625 ++ VERSION_CODENAME= 00:01:39.625 ++ PLATFORM_ID=platform:f39 00:01:39.625 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:39.625 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:39.625 ++ LOGO=fedora-logo-icon 00:01:39.625 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:39.625 ++ HOME_URL=https://fedoraproject.org/ 00:01:39.625 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:39.625 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:39.625 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:39.625 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:39.625 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:39.625 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:39.625 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:39.625 ++ SUPPORT_END=2024-11-12 00:01:39.625 ++ VARIANT='Cloud Edition' 00:01:39.625 ++ VARIANT_ID=cloud 00:01:39.625 + uname -a 00:01:39.625 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:39.625 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:39.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:39.884 Hugepages 00:01:39.884 node hugesize free / total 00:01:39.884 node0 1048576kB 0 / 0 00:01:40.144 node0 2048kB 0 / 0 00:01:40.144 00:01:40.144 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:40.144 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:40.144 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:40.144 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:40.144 + rm -f /tmp/spdk-ld-path 00:01:40.144 + source autorun-spdk.conf 00:01:40.144 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.144 ++ SPDK_TEST_NVMF=1 00:01:40.144 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:40.144 ++ SPDK_TEST_URING=1 00:01:40.144 ++ SPDK_TEST_USDT=1 00:01:40.144 ++ SPDK_RUN_UBSAN=1 00:01:40.144 ++ NET_TYPE=virt 00:01:40.144 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.144 ++ RUN_NIGHTLY=0 00:01:40.144 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:40.144 + [[ -n '' ]] 00:01:40.144 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:40.144 + for M in /var/spdk/build-*-manifest.txt 00:01:40.144 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:40.144 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:40.144 + for M in /var/spdk/build-*-manifest.txt 00:01:40.144 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:40.144 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:40.144 + for M in /var/spdk/build-*-manifest.txt 00:01:40.144 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:40.144 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:40.144 ++ uname 00:01:40.144 + [[ Linux == \L\i\n\u\x ]] 00:01:40.144 + sudo dmesg -T 00:01:40.144 + sudo dmesg --clear 00:01:40.144 + dmesg_pid=5200 00:01:40.144 + sudo dmesg -Tw 00:01:40.144 + [[ Fedora Linux == FreeBSD ]] 00:01:40.144 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.144 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:40.144 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:40.144 + [[ -x /usr/src/fio-static/fio ]] 00:01:40.144 + export FIO_BIN=/usr/src/fio-static/fio 00:01:40.144 + FIO_BIN=/usr/src/fio-static/fio 00:01:40.144 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:40.144 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:40.144 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:40.144 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.144 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:40.144 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:40.144 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.144 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:40.144 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:40.404 09:51:12 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:40.404 09:51:12 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:40.404 09:51:12 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:40.404 09:51:12 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:40.404 09:51:12 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:40.404 09:51:12 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:40.404 09:51:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:40.404 09:51:12 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:40.404 09:51:12 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:40.404 09:51:12 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:40.404 09:51:12 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:40.404 09:51:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.404 09:51:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.404 09:51:12 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.404 09:51:12 -- paths/export.sh@5 -- $ export PATH 00:01:40.404 09:51:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:40.404 09:51:12 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:40.404 09:51:12 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:40.404 09:51:12 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730713872.XXXXXX 00:01:40.404 09:51:12 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730713872.0wEDJI 00:01:40.404 09:51:12 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:40.404 09:51:12 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:40.404 09:51:12 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:40.404 09:51:12 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:40.404 09:51:12 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:40.404 09:51:12 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:40.404 09:51:12 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:40.404 09:51:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:40.404 09:51:12 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:40.404 09:51:12 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:40.404 09:51:12 -- pm/common@17 -- $ local monitor 00:01:40.404 09:51:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.404 09:51:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:40.404 09:51:12 -- pm/common@25 -- $ sleep 1 00:01:40.404 09:51:12 -- pm/common@21 -- $ date +%s 00:01:40.404 09:51:12 -- pm/common@21 -- $ date +%s 00:01:40.404 09:51:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730713872 00:01:40.404 09:51:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730713872 00:01:40.404 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730713872_collect-cpu-load.pm.log 00:01:40.404 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730713872_collect-vmstat.pm.log 00:01:41.360 09:51:13 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:41.360 09:51:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:41.360 09:51:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:41.360 09:51:13 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:41.360 09:51:13 -- spdk/autobuild.sh@16 -- $ date -u 00:01:41.360 Mon Nov 4 09:51:13 AM UTC 2024 00:01:41.360 09:51:13 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:41.360 v25.01-pre-143-gfcc19e276 00:01:41.360 09:51:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:41.360 09:51:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:41.360 09:51:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:41.360 09:51:13 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:41.360 09:51:13 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:41.360 09:51:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.360 ************************************ 00:01:41.360 START TEST ubsan 00:01:41.360 ************************************ 00:01:41.360 using ubsan 00:01:41.360 09:51:13 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:41.360 00:01:41.360 real 0m0.000s 00:01:41.360 user 0m0.000s 00:01:41.360 sys 0m0.000s 00:01:41.360 09:51:13 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:41.360 ************************************ 00:01:41.360 END TEST ubsan 00:01:41.360 ************************************ 00:01:41.360 09:51:13 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:41.360 09:51:13 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:41.360 09:51:13 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:41.360 09:51:13 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:41.360 09:51:13 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:41.360 09:51:13 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:41.360 09:51:13 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:41.360 09:51:13 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:41.360 09:51:13 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:41.360 09:51:13 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:41.620 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:41.620 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:41.879 Using 'verbs' RDMA provider 00:01:57.694 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:09.903 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:09.903 Creating mk/config.mk...done. 00:02:09.903 Creating mk/cc.flags.mk...done. 00:02:09.903 Type 'make' to build. 
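Note: the autobuild stage above reduces to a plain configure-and-build of the checked-out tree. Run by hand inside the VM it would look roughly like the following sketch (flags copied from the configure invocation logged above; -j10 matches the 10 vCPUs given to the VM; the resource monitors and other steps autobuild.sh wraps around it are omitted):

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j10

The --with-uring, --enable-ubsan, and --with-usdt flags appear to be derived from SPDK_TEST_URING, SPDK_RUN_UBSAN, and SPDK_TEST_USDT in autorun-spdk.conf shown earlier.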
00:02:09.903 09:51:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:09.903 09:51:40 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:09.903 09:51:40 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:09.903 09:51:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.903 ************************************ 00:02:09.903 START TEST make 00:02:09.903 ************************************ 00:02:09.903 09:51:40 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:09.903 make[1]: Nothing to be done for 'all'. 00:02:22.108 The Meson build system 00:02:22.108 Version: 1.5.0 00:02:22.108 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:22.108 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:22.108 Build type: native build 00:02:22.108 Program cat found: YES (/usr/bin/cat) 00:02:22.108 Project name: DPDK 00:02:22.108 Project version: 24.03.0 00:02:22.108 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:22.108 C linker for the host machine: cc ld.bfd 2.40-14 00:02:22.108 Host machine cpu family: x86_64 00:02:22.108 Host machine cpu: x86_64 00:02:22.108 Message: ## Building in Developer Mode ## 00:02:22.108 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:22.108 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:22.108 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:22.108 Program python3 found: YES (/usr/bin/python3) 00:02:22.108 Program cat found: YES (/usr/bin/cat) 00:02:22.108 Compiler for C supports arguments -march=native: YES 00:02:22.108 Checking for size of "void *" : 8 00:02:22.108 Checking for size of "void *" : 8 (cached) 00:02:22.108 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:22.108 Library m found: YES 00:02:22.108 Library numa found: YES 00:02:22.108 Has header "numaif.h" : YES 00:02:22.108 Library fdt found: NO 00:02:22.108 Library execinfo found: NO 00:02:22.108 Has header "execinfo.h" : YES 00:02:22.108 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:22.108 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:22.108 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:22.108 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:22.108 Run-time dependency openssl found: YES 3.1.1 00:02:22.108 Run-time dependency libpcap found: YES 1.10.4 00:02:22.108 Has header "pcap.h" with dependency libpcap: YES 00:02:22.108 Compiler for C supports arguments -Wcast-qual: YES 00:02:22.108 Compiler for C supports arguments -Wdeprecated: YES 00:02:22.108 Compiler for C supports arguments -Wformat: YES 00:02:22.108 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:22.108 Compiler for C supports arguments -Wformat-security: NO 00:02:22.108 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.108 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:22.108 Compiler for C supports arguments -Wnested-externs: YES 00:02:22.108 Compiler for C supports arguments -Wold-style-definition: YES 00:02:22.108 Compiler for C supports arguments -Wpointer-arith: YES 00:02:22.108 Compiler for C supports arguments -Wsign-compare: YES 00:02:22.108 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:22.108 Compiler for C supports arguments -Wundef: YES 00:02:22.108 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.108 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:22.108 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:22.108 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.108 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:22.108 Program objdump found: YES (/usr/bin/objdump) 00:02:22.108 Compiler for C supports arguments -mavx512f: YES 00:02:22.108 Checking if "AVX512 checking" compiles: YES 00:02:22.108 Fetching value of define "__SSE4_2__" : 1 00:02:22.108 Fetching value of define "__AES__" : 1 00:02:22.108 Fetching value of define "__AVX__" : 1 00:02:22.108 Fetching value of define "__AVX2__" : 1 00:02:22.108 Fetching value of define "__AVX512BW__" : (undefined) 00:02:22.108 Fetching value of define "__AVX512CD__" : (undefined) 00:02:22.108 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:22.108 Fetching value of define "__AVX512F__" : (undefined) 00:02:22.108 Fetching value of define "__AVX512VL__" : (undefined) 00:02:22.108 Fetching value of define "__PCLMUL__" : 1 00:02:22.108 Fetching value of define "__RDRND__" : 1 00:02:22.108 Fetching value of define "__RDSEED__" : 1 00:02:22.108 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:22.108 Fetching value of define "__znver1__" : (undefined) 00:02:22.108 Fetching value of define "__znver2__" : (undefined) 00:02:22.108 Fetching value of define "__znver3__" : (undefined) 00:02:22.108 Fetching value of define "__znver4__" : (undefined) 00:02:22.108 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:22.108 Message: lib/log: Defining dependency "log" 00:02:22.108 Message: lib/kvargs: Defining dependency "kvargs" 00:02:22.108 Message: lib/telemetry: Defining dependency "telemetry" 00:02:22.108 Checking for function "getentropy" : NO 00:02:22.108 Message: lib/eal: Defining dependency "eal" 00:02:22.108 Message: lib/ring: Defining dependency "ring" 00:02:22.108 Message: lib/rcu: Defining dependency "rcu" 00:02:22.108 Message: lib/mempool: Defining dependency "mempool" 00:02:22.108 Message: lib/mbuf: Defining dependency "mbuf" 00:02:22.108 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:22.108 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:22.108 Compiler for C supports arguments -mpclmul: YES 00:02:22.108 Compiler for C supports arguments -maes: YES 00:02:22.108 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.108 Compiler for C supports arguments -mavx512bw: YES 00:02:22.108 Compiler for C supports arguments -mavx512dq: YES 00:02:22.108 Compiler for C supports arguments -mavx512vl: YES 00:02:22.108 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:22.108 Compiler for C supports arguments -mavx2: YES 00:02:22.108 Compiler for C supports arguments -mavx: YES 00:02:22.108 Message: lib/net: Defining dependency "net" 00:02:22.108 Message: lib/meter: Defining dependency "meter" 00:02:22.108 Message: lib/ethdev: Defining dependency "ethdev" 00:02:22.108 Message: lib/pci: Defining dependency "pci" 00:02:22.108 Message: lib/cmdline: Defining dependency "cmdline" 00:02:22.108 Message: lib/hash: Defining dependency "hash" 00:02:22.108 Message: lib/timer: Defining dependency "timer" 00:02:22.108 Message: lib/compressdev: Defining dependency "compressdev" 00:02:22.108 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:22.108 Message: lib/dmadev: Defining dependency "dmadev" 00:02:22.108 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:22.108 Message: lib/power: Defining 
dependency "power" 00:02:22.108 Message: lib/reorder: Defining dependency "reorder" 00:02:22.108 Message: lib/security: Defining dependency "security" 00:02:22.108 Has header "linux/userfaultfd.h" : YES 00:02:22.108 Has header "linux/vduse.h" : YES 00:02:22.108 Message: lib/vhost: Defining dependency "vhost" 00:02:22.108 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:22.108 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:22.108 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:22.108 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:22.108 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:22.108 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:22.108 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:22.108 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:22.108 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:22.108 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:22.108 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:22.108 Configuring doxy-api-html.conf using configuration 00:02:22.108 Configuring doxy-api-man.conf using configuration 00:02:22.108 Program mandb found: YES (/usr/bin/mandb) 00:02:22.108 Program sphinx-build found: NO 00:02:22.108 Configuring rte_build_config.h using configuration 00:02:22.108 Message: 00:02:22.108 ================= 00:02:22.108 Applications Enabled 00:02:22.108 ================= 00:02:22.108 00:02:22.108 apps: 00:02:22.108 00:02:22.108 00:02:22.108 Message: 00:02:22.108 ================= 00:02:22.108 Libraries Enabled 00:02:22.108 ================= 00:02:22.108 00:02:22.108 libs: 00:02:22.108 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:22.108 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:22.108 cryptodev, dmadev, power, reorder, security, vhost, 00:02:22.108 00:02:22.108 Message: 00:02:22.108 =============== 00:02:22.108 Drivers Enabled 00:02:22.108 =============== 00:02:22.108 00:02:22.108 common: 00:02:22.108 00:02:22.108 bus: 00:02:22.108 pci, vdev, 00:02:22.108 mempool: 00:02:22.108 ring, 00:02:22.108 dma: 00:02:22.108 00:02:22.108 net: 00:02:22.108 00:02:22.108 crypto: 00:02:22.108 00:02:22.108 compress: 00:02:22.108 00:02:22.108 vdpa: 00:02:22.108 00:02:22.108 00:02:22.108 Message: 00:02:22.108 ================= 00:02:22.109 Content Skipped 00:02:22.109 ================= 00:02:22.109 00:02:22.109 apps: 00:02:22.109 dumpcap: explicitly disabled via build config 00:02:22.109 graph: explicitly disabled via build config 00:02:22.109 pdump: explicitly disabled via build config 00:02:22.109 proc-info: explicitly disabled via build config 00:02:22.109 test-acl: explicitly disabled via build config 00:02:22.109 test-bbdev: explicitly disabled via build config 00:02:22.109 test-cmdline: explicitly disabled via build config 00:02:22.109 test-compress-perf: explicitly disabled via build config 00:02:22.109 test-crypto-perf: explicitly disabled via build config 00:02:22.109 test-dma-perf: explicitly disabled via build config 00:02:22.109 test-eventdev: explicitly disabled via build config 00:02:22.109 test-fib: explicitly disabled via build config 00:02:22.109 test-flow-perf: explicitly disabled via build config 00:02:22.109 test-gpudev: explicitly disabled via build config 00:02:22.109 test-mldev: explicitly disabled via build config 00:02:22.109 test-pipeline: 
explicitly disabled via build config 00:02:22.109 test-pmd: explicitly disabled via build config 00:02:22.109 test-regex: explicitly disabled via build config 00:02:22.109 test-sad: explicitly disabled via build config 00:02:22.109 test-security-perf: explicitly disabled via build config 00:02:22.109 00:02:22.109 libs: 00:02:22.109 argparse: explicitly disabled via build config 00:02:22.109 metrics: explicitly disabled via build config 00:02:22.109 acl: explicitly disabled via build config 00:02:22.109 bbdev: explicitly disabled via build config 00:02:22.109 bitratestats: explicitly disabled via build config 00:02:22.109 bpf: explicitly disabled via build config 00:02:22.109 cfgfile: explicitly disabled via build config 00:02:22.109 distributor: explicitly disabled via build config 00:02:22.109 efd: explicitly disabled via build config 00:02:22.109 eventdev: explicitly disabled via build config 00:02:22.109 dispatcher: explicitly disabled via build config 00:02:22.109 gpudev: explicitly disabled via build config 00:02:22.109 gro: explicitly disabled via build config 00:02:22.109 gso: explicitly disabled via build config 00:02:22.109 ip_frag: explicitly disabled via build config 00:02:22.109 jobstats: explicitly disabled via build config 00:02:22.109 latencystats: explicitly disabled via build config 00:02:22.109 lpm: explicitly disabled via build config 00:02:22.109 member: explicitly disabled via build config 00:02:22.109 pcapng: explicitly disabled via build config 00:02:22.109 rawdev: explicitly disabled via build config 00:02:22.109 regexdev: explicitly disabled via build config 00:02:22.109 mldev: explicitly disabled via build config 00:02:22.109 rib: explicitly disabled via build config 00:02:22.109 sched: explicitly disabled via build config 00:02:22.109 stack: explicitly disabled via build config 00:02:22.109 ipsec: explicitly disabled via build config 00:02:22.109 pdcp: explicitly disabled via build config 00:02:22.109 fib: explicitly disabled via build config 00:02:22.109 port: explicitly disabled via build config 00:02:22.109 pdump: explicitly disabled via build config 00:02:22.109 table: explicitly disabled via build config 00:02:22.109 pipeline: explicitly disabled via build config 00:02:22.109 graph: explicitly disabled via build config 00:02:22.109 node: explicitly disabled via build config 00:02:22.109 00:02:22.109 drivers: 00:02:22.109 common/cpt: not in enabled drivers build config 00:02:22.109 common/dpaax: not in enabled drivers build config 00:02:22.109 common/iavf: not in enabled drivers build config 00:02:22.109 common/idpf: not in enabled drivers build config 00:02:22.109 common/ionic: not in enabled drivers build config 00:02:22.109 common/mvep: not in enabled drivers build config 00:02:22.109 common/octeontx: not in enabled drivers build config 00:02:22.109 bus/auxiliary: not in enabled drivers build config 00:02:22.109 bus/cdx: not in enabled drivers build config 00:02:22.109 bus/dpaa: not in enabled drivers build config 00:02:22.109 bus/fslmc: not in enabled drivers build config 00:02:22.109 bus/ifpga: not in enabled drivers build config 00:02:22.109 bus/platform: not in enabled drivers build config 00:02:22.109 bus/uacce: not in enabled drivers build config 00:02:22.109 bus/vmbus: not in enabled drivers build config 00:02:22.109 common/cnxk: not in enabled drivers build config 00:02:22.109 common/mlx5: not in enabled drivers build config 00:02:22.109 common/nfp: not in enabled drivers build config 00:02:22.109 common/nitrox: not in enabled drivers build config 
00:02:22.109 common/qat: not in enabled drivers build config 00:02:22.109 common/sfc_efx: not in enabled drivers build config 00:02:22.109 mempool/bucket: not in enabled drivers build config 00:02:22.109 mempool/cnxk: not in enabled drivers build config 00:02:22.109 mempool/dpaa: not in enabled drivers build config 00:02:22.109 mempool/dpaa2: not in enabled drivers build config 00:02:22.109 mempool/octeontx: not in enabled drivers build config 00:02:22.109 mempool/stack: not in enabled drivers build config 00:02:22.109 dma/cnxk: not in enabled drivers build config 00:02:22.109 dma/dpaa: not in enabled drivers build config 00:02:22.109 dma/dpaa2: not in enabled drivers build config 00:02:22.109 dma/hisilicon: not in enabled drivers build config 00:02:22.109 dma/idxd: not in enabled drivers build config 00:02:22.109 dma/ioat: not in enabled drivers build config 00:02:22.109 dma/skeleton: not in enabled drivers build config 00:02:22.109 net/af_packet: not in enabled drivers build config 00:02:22.109 net/af_xdp: not in enabled drivers build config 00:02:22.109 net/ark: not in enabled drivers build config 00:02:22.109 net/atlantic: not in enabled drivers build config 00:02:22.109 net/avp: not in enabled drivers build config 00:02:22.109 net/axgbe: not in enabled drivers build config 00:02:22.109 net/bnx2x: not in enabled drivers build config 00:02:22.109 net/bnxt: not in enabled drivers build config 00:02:22.109 net/bonding: not in enabled drivers build config 00:02:22.109 net/cnxk: not in enabled drivers build config 00:02:22.109 net/cpfl: not in enabled drivers build config 00:02:22.109 net/cxgbe: not in enabled drivers build config 00:02:22.109 net/dpaa: not in enabled drivers build config 00:02:22.109 net/dpaa2: not in enabled drivers build config 00:02:22.109 net/e1000: not in enabled drivers build config 00:02:22.109 net/ena: not in enabled drivers build config 00:02:22.109 net/enetc: not in enabled drivers build config 00:02:22.109 net/enetfec: not in enabled drivers build config 00:02:22.109 net/enic: not in enabled drivers build config 00:02:22.109 net/failsafe: not in enabled drivers build config 00:02:22.109 net/fm10k: not in enabled drivers build config 00:02:22.109 net/gve: not in enabled drivers build config 00:02:22.109 net/hinic: not in enabled drivers build config 00:02:22.109 net/hns3: not in enabled drivers build config 00:02:22.109 net/i40e: not in enabled drivers build config 00:02:22.109 net/iavf: not in enabled drivers build config 00:02:22.109 net/ice: not in enabled drivers build config 00:02:22.109 net/idpf: not in enabled drivers build config 00:02:22.109 net/igc: not in enabled drivers build config 00:02:22.109 net/ionic: not in enabled drivers build config 00:02:22.109 net/ipn3ke: not in enabled drivers build config 00:02:22.109 net/ixgbe: not in enabled drivers build config 00:02:22.109 net/mana: not in enabled drivers build config 00:02:22.109 net/memif: not in enabled drivers build config 00:02:22.109 net/mlx4: not in enabled drivers build config 00:02:22.109 net/mlx5: not in enabled drivers build config 00:02:22.109 net/mvneta: not in enabled drivers build config 00:02:22.109 net/mvpp2: not in enabled drivers build config 00:02:22.109 net/netvsc: not in enabled drivers build config 00:02:22.109 net/nfb: not in enabled drivers build config 00:02:22.109 net/nfp: not in enabled drivers build config 00:02:22.109 net/ngbe: not in enabled drivers build config 00:02:22.109 net/null: not in enabled drivers build config 00:02:22.109 net/octeontx: not in enabled drivers 
build config 00:02:22.109 net/octeon_ep: not in enabled drivers build config 00:02:22.109 net/pcap: not in enabled drivers build config 00:02:22.109 net/pfe: not in enabled drivers build config 00:02:22.109 net/qede: not in enabled drivers build config 00:02:22.109 net/ring: not in enabled drivers build config 00:02:22.109 net/sfc: not in enabled drivers build config 00:02:22.109 net/softnic: not in enabled drivers build config 00:02:22.109 net/tap: not in enabled drivers build config 00:02:22.109 net/thunderx: not in enabled drivers build config 00:02:22.109 net/txgbe: not in enabled drivers build config 00:02:22.109 net/vdev_netvsc: not in enabled drivers build config 00:02:22.109 net/vhost: not in enabled drivers build config 00:02:22.109 net/virtio: not in enabled drivers build config 00:02:22.109 net/vmxnet3: not in enabled drivers build config 00:02:22.109 raw/*: missing internal dependency, "rawdev" 00:02:22.109 crypto/armv8: not in enabled drivers build config 00:02:22.109 crypto/bcmfs: not in enabled drivers build config 00:02:22.109 crypto/caam_jr: not in enabled drivers build config 00:02:22.109 crypto/ccp: not in enabled drivers build config 00:02:22.109 crypto/cnxk: not in enabled drivers build config 00:02:22.109 crypto/dpaa_sec: not in enabled drivers build config 00:02:22.109 crypto/dpaa2_sec: not in enabled drivers build config 00:02:22.109 crypto/ipsec_mb: not in enabled drivers build config 00:02:22.109 crypto/mlx5: not in enabled drivers build config 00:02:22.109 crypto/mvsam: not in enabled drivers build config 00:02:22.109 crypto/nitrox: not in enabled drivers build config 00:02:22.109 crypto/null: not in enabled drivers build config 00:02:22.109 crypto/octeontx: not in enabled drivers build config 00:02:22.109 crypto/openssl: not in enabled drivers build config 00:02:22.109 crypto/scheduler: not in enabled drivers build config 00:02:22.109 crypto/uadk: not in enabled drivers build config 00:02:22.109 crypto/virtio: not in enabled drivers build config 00:02:22.109 compress/isal: not in enabled drivers build config 00:02:22.109 compress/mlx5: not in enabled drivers build config 00:02:22.109 compress/nitrox: not in enabled drivers build config 00:02:22.109 compress/octeontx: not in enabled drivers build config 00:02:22.109 compress/zlib: not in enabled drivers build config 00:02:22.109 regex/*: missing internal dependency, "regexdev" 00:02:22.109 ml/*: missing internal dependency, "mldev" 00:02:22.109 vdpa/ifc: not in enabled drivers build config 00:02:22.109 vdpa/mlx5: not in enabled drivers build config 00:02:22.109 vdpa/nfp: not in enabled drivers build config 00:02:22.109 vdpa/sfc: not in enabled drivers build config 00:02:22.109 event/*: missing internal dependency, "eventdev" 00:02:22.109 baseband/*: missing internal dependency, "bbdev" 00:02:22.109 gpu/*: missing internal dependency, "gpudev" 00:02:22.109 00:02:22.109 00:02:22.109 Build targets in project: 85 00:02:22.109 00:02:22.109 DPDK 24.03.0 00:02:22.110 00:02:22.110 User defined options 00:02:22.110 buildtype : debug 00:02:22.110 default_library : shared 00:02:22.110 libdir : lib 00:02:22.110 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:22.110 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:22.110 c_link_args : 00:02:22.110 cpu_instruction_set: native 00:02:22.110 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:22.110 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:22.110 enable_docs : false 00:02:22.110 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:22.110 enable_kmods : false 00:02:22.110 max_lcores : 128 00:02:22.110 tests : false 00:02:22.110 00:02:22.110 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.110 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:22.110 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:22.110 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.110 [3/268] Linking static target lib/librte_kvargs.a 00:02:22.110 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:22.110 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:22.369 [6/268] Linking static target lib/librte_log.a 00:02:22.628 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.886 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.886 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:22.886 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.886 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:22.886 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.145 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.145 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.145 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.145 [16/268] Linking static target lib/librte_telemetry.a 00:02:23.145 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.145 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.145 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.404 [20/268] Linking target lib/librte_log.so.24.1 00:02:23.404 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:23.663 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:23.663 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:23.663 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:23.922 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:23.922 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:23.922 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.922 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:23.922 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.207 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:24.207 [31/268] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.207 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.207 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:24.207 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.487 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.487 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:24.487 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:24.746 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.746 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.746 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:24.746 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.006 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.006 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.006 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:25.006 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.006 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.265 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:25.265 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.525 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:25.525 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:25.525 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.784 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:25.784 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.043 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.043 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.043 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.043 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.043 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.301 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.301 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.301 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.559 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.559 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:26.818 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.818 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:27.077 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.077 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.336 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.336 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:27.336 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.336 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.336 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.336 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.631 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.631 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.631 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.631 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.631 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:27.893 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.893 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.893 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.151 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.151 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:28.151 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.151 [85/268] Linking static target lib/librte_eal.a 00:02:28.410 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:28.410 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.410 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.410 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:28.410 [90/268] Linking static target lib/librte_rcu.a 00:02:28.668 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.668 [92/268] Linking static target lib/librte_ring.a 00:02:28.668 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:28.927 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:28.927 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.927 [96/268] Linking static target lib/librte_mempool.a 00:02:28.927 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.927 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:28.927 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.927 [100/268] Linking static target lib/librte_mbuf.a 00:02:28.927 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:29.186 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:29.186 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.445 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.445 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:29.445 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:29.445 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.703 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:29.703 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:29.703 [110/268] Linking static target lib/librte_meter.a 00:02:29.703 [111/268] Linking static target lib/librte_net.a 00:02:29.962 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.962 [113/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.220 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.220 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.220 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.220 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.220 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.479 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.737 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.737 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.737 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:31.303 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:31.303 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.303 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:31.303 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:31.303 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:31.562 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:31.562 [129/268] Linking static target lib/librte_pci.a 00:02:31.562 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:31.562 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:31.562 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:31.562 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:31.562 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:31.562 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:31.562 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:31.562 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:31.820 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:31.820 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:31.820 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:31.820 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:31.820 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.820 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:31.820 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.079 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.079 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.079 [147/268] Linking static target lib/librte_cmdline.a 00:02:32.079 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.079 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:32.338 [150/268] Linking static target lib/librte_ethdev.a 00:02:32.595 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:32.595 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:32.595 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.595 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:32.853 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:32.853 [156/268] Linking static target lib/librte_hash.a 00:02:33.112 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.112 [158/268] Linking static target lib/librte_timer.a 00:02:33.112 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:33.112 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:33.371 [161/268] Linking static target lib/librte_compressdev.a 00:02:33.371 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:33.371 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:33.371 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:33.629 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.629 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.887 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:33.887 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:33.887 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:33.887 [170/268] Linking static target lib/librte_dmadev.a 00:02:33.887 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:33.887 [172/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.887 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:33.887 [174/268] Linking static target lib/librte_cryptodev.a 00:02:33.887 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:34.145 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.145 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:34.404 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:34.662 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:34.662 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:34.662 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:34.662 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.920 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:34.920 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.178 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.178 [186/268] Linking static target lib/librte_power.a 00:02:35.178 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.178 [188/268] Linking static target lib/librte_reorder.a 00:02:35.436 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.436 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.436 [191/268] Linking static target lib/librte_security.a 00:02:35.436 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.436 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.003 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.003 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.263 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.521 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.521 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.521 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.521 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:36.521 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:36.780 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:37.039 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.039 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:37.039 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:37.298 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:37.298 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:37.298 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:37.298 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.557 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:37.557 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:37.557 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.557 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:37.557 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:37.557 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.815 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:37.815 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:37.815 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.815 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:37.815 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:37.815 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:37.815 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.073 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.073 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.073 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.073 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.073 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:38.330 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.589 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:38.589 [230/268] Linking static target lib/librte_vhost.a 00:02:39.521 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.521 [232/268] Linking target lib/librte_eal.so.24.1 00:02:39.521 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:39.521 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:39.780 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:39.780 [236/268] Linking target lib/librte_ring.so.24.1 00:02:39.780 [237/268] Linking target lib/librte_timer.so.24.1 00:02:39.780 [238/268] Linking target lib/librte_pci.so.24.1 00:02:39.780 [239/268] Linking target lib/librte_meter.so.24.1 00:02:39.780 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:39.780 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:39.780 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:39.780 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:39.780 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:39.780 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:39.780 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:39.780 [247/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.038 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:40.038 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:40.038 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:40.038 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:40.038 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:40.296 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:40.296 [254/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.296 [255/268] Linking target lib/librte_net.so.24.1 00:02:40.296 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:40.296 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:40.296 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:40.296 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:40.296 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:40.554 [261/268] Linking target lib/librte_hash.so.24.1 00:02:40.554 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:40.554 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:40.554 [264/268] Linking target lib/librte_security.so.24.1 00:02:40.554 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:40.554 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:40.554 [267/268] Linking target lib/librte_power.so.24.1 00:02:40.554 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:40.554 INFO: autodetecting backend as ninja 00:02:40.554 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:07.189 CC lib/ut_mock/mock.o 00:03:07.189 CC lib/log/log.o 00:03:07.189 CC lib/log/log_deprecated.o 00:03:07.189 CC lib/log/log_flags.o 00:03:07.189 CC lib/ut/ut.o 00:03:07.189 LIB 
libspdk_ut_mock.a 00:03:07.189 LIB libspdk_log.a 00:03:07.189 LIB libspdk_ut.a 00:03:07.189 SO libspdk_ut_mock.so.6.0 00:03:07.189 SO libspdk_ut.so.2.0 00:03:07.189 SO libspdk_log.so.7.1 00:03:07.189 SYMLINK libspdk_ut_mock.so 00:03:07.189 SYMLINK libspdk_ut.so 00:03:07.189 SYMLINK libspdk_log.so 00:03:07.189 CXX lib/trace_parser/trace.o 00:03:07.189 CC lib/util/base64.o 00:03:07.189 CC lib/util/bit_array.o 00:03:07.189 CC lib/util/cpuset.o 00:03:07.189 CC lib/ioat/ioat.o 00:03:07.189 CC lib/util/crc16.o 00:03:07.189 CC lib/util/crc32.o 00:03:07.189 CC lib/util/crc32c.o 00:03:07.189 CC lib/dma/dma.o 00:03:07.189 CC lib/vfio_user/host/vfio_user_pci.o 00:03:07.189 CC lib/util/crc32_ieee.o 00:03:07.189 CC lib/vfio_user/host/vfio_user.o 00:03:07.189 CC lib/util/crc64.o 00:03:07.448 CC lib/util/dif.o 00:03:07.448 LIB libspdk_ioat.a 00:03:07.448 LIB libspdk_dma.a 00:03:07.448 CC lib/util/fd.o 00:03:07.448 SO libspdk_ioat.so.7.0 00:03:07.448 SO libspdk_dma.so.5.0 00:03:07.448 CC lib/util/fd_group.o 00:03:07.448 CC lib/util/file.o 00:03:07.448 SYMLINK libspdk_ioat.so 00:03:07.448 CC lib/util/hexlify.o 00:03:07.448 SYMLINK libspdk_dma.so 00:03:07.448 CC lib/util/iov.o 00:03:07.448 CC lib/util/math.o 00:03:07.448 CC lib/util/net.o 00:03:07.448 CC lib/util/pipe.o 00:03:07.705 LIB libspdk_vfio_user.a 00:03:07.705 CC lib/util/strerror_tls.o 00:03:07.705 SO libspdk_vfio_user.so.5.0 00:03:07.705 CC lib/util/string.o 00:03:07.705 SYMLINK libspdk_vfio_user.so 00:03:07.705 CC lib/util/uuid.o 00:03:07.705 CC lib/util/xor.o 00:03:07.705 CC lib/util/zipf.o 00:03:07.705 CC lib/util/md5.o 00:03:08.274 LIB libspdk_trace_parser.a 00:03:08.274 LIB libspdk_util.a 00:03:08.274 SO libspdk_trace_parser.so.6.0 00:03:08.274 SO libspdk_util.so.10.1 00:03:08.274 SYMLINK libspdk_trace_parser.so 00:03:08.533 SYMLINK libspdk_util.so 00:03:08.792 CC lib/idxd/idxd.o 00:03:08.792 CC lib/idxd/idxd_user.o 00:03:08.792 CC lib/idxd/idxd_kernel.o 00:03:08.792 CC lib/vmd/vmd.o 00:03:08.792 CC lib/vmd/led.o 00:03:08.792 CC lib/env_dpdk/env.o 00:03:08.792 CC lib/conf/conf.o 00:03:08.792 CC lib/rdma_utils/rdma_utils.o 00:03:08.792 CC lib/rdma_provider/common.o 00:03:08.792 CC lib/json/json_parse.o 00:03:08.792 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:09.051 CC lib/json/json_util.o 00:03:09.051 CC lib/json/json_write.o 00:03:09.051 LIB libspdk_conf.a 00:03:09.051 CC lib/env_dpdk/memory.o 00:03:09.051 CC lib/env_dpdk/pci.o 00:03:09.051 LIB libspdk_rdma_provider.a 00:03:09.051 SO libspdk_conf.so.6.0 00:03:09.309 SO libspdk_rdma_provider.so.6.0 00:03:09.309 LIB libspdk_rdma_utils.a 00:03:09.309 SYMLINK libspdk_conf.so 00:03:09.309 CC lib/env_dpdk/init.o 00:03:09.309 SO libspdk_rdma_utils.so.1.0 00:03:09.309 SYMLINK libspdk_rdma_provider.so 00:03:09.309 CC lib/env_dpdk/threads.o 00:03:09.309 SYMLINK libspdk_rdma_utils.so 00:03:09.309 CC lib/env_dpdk/pci_ioat.o 00:03:09.309 CC lib/env_dpdk/pci_virtio.o 00:03:09.567 LIB libspdk_json.a 00:03:09.567 CC lib/env_dpdk/pci_vmd.o 00:03:09.567 SO libspdk_json.so.6.0 00:03:09.567 CC lib/env_dpdk/pci_idxd.o 00:03:09.567 CC lib/env_dpdk/pci_event.o 00:03:09.825 SYMLINK libspdk_json.so 00:03:09.825 CC lib/env_dpdk/sigbus_handler.o 00:03:09.825 CC lib/env_dpdk/pci_dpdk.o 00:03:09.825 LIB libspdk_idxd.a 00:03:09.825 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:09.825 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:09.825 SO libspdk_idxd.so.12.1 00:03:09.825 LIB libspdk_vmd.a 00:03:09.825 SYMLINK libspdk_idxd.so 00:03:09.825 SO libspdk_vmd.so.6.0 00:03:10.083 SYMLINK libspdk_vmd.so 00:03:10.083 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:10.083 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:10.083 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.083 CC lib/jsonrpc/jsonrpc_client.o 00:03:10.341 LIB libspdk_env_dpdk.a 00:03:10.341 LIB libspdk_jsonrpc.a 00:03:10.599 SO libspdk_jsonrpc.so.6.0 00:03:10.599 SO libspdk_env_dpdk.so.15.1 00:03:10.599 SYMLINK libspdk_jsonrpc.so 00:03:10.879 SYMLINK libspdk_env_dpdk.so 00:03:10.879 CC lib/rpc/rpc.o 00:03:11.191 LIB libspdk_rpc.a 00:03:11.191 SO libspdk_rpc.so.6.0 00:03:11.191 SYMLINK libspdk_rpc.so 00:03:11.449 CC lib/trace/trace.o 00:03:11.449 CC lib/trace/trace_flags.o 00:03:11.449 CC lib/trace/trace_rpc.o 00:03:11.449 CC lib/keyring/keyring.o 00:03:11.449 CC lib/keyring/keyring_rpc.o 00:03:11.449 CC lib/notify/notify.o 00:03:11.449 CC lib/notify/notify_rpc.o 00:03:11.708 LIB libspdk_notify.a 00:03:11.708 LIB libspdk_keyring.a 00:03:11.708 SO libspdk_notify.so.6.0 00:03:11.708 SO libspdk_keyring.so.2.0 00:03:11.708 LIB libspdk_trace.a 00:03:11.708 SYMLINK libspdk_notify.so 00:03:11.708 SYMLINK libspdk_keyring.so 00:03:11.708 SO libspdk_trace.so.11.0 00:03:11.967 SYMLINK libspdk_trace.so 00:03:12.225 CC lib/thread/thread.o 00:03:12.225 CC lib/sock/sock_rpc.o 00:03:12.225 CC lib/sock/sock.o 00:03:12.225 CC lib/thread/iobuf.o 00:03:12.490 LIB libspdk_sock.a 00:03:12.748 SO libspdk_sock.so.10.0 00:03:12.748 SYMLINK libspdk_sock.so 00:03:13.008 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.008 CC lib/nvme/nvme_ctrlr.o 00:03:13.008 CC lib/nvme/nvme_fabric.o 00:03:13.008 CC lib/nvme/nvme_ns_cmd.o 00:03:13.008 CC lib/nvme/nvme_pcie_common.o 00:03:13.008 CC lib/nvme/nvme_ns.o 00:03:13.008 CC lib/nvme/nvme_pcie.o 00:03:13.008 CC lib/nvme/nvme_qpair.o 00:03:13.008 CC lib/nvme/nvme.o 00:03:13.944 CC lib/nvme/nvme_quirks.o 00:03:13.944 CC lib/nvme/nvme_transport.o 00:03:13.944 CC lib/nvme/nvme_discovery.o 00:03:13.944 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:13.944 LIB libspdk_thread.a 00:03:13.944 SO libspdk_thread.so.11.0 00:03:14.203 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.203 SYMLINK libspdk_thread.so 00:03:14.203 CC lib/nvme/nvme_tcp.o 00:03:14.203 CC lib/accel/accel.o 00:03:14.461 CC lib/blob/blobstore.o 00:03:14.461 CC lib/init/json_config.o 00:03:14.720 CC lib/init/subsystem.o 00:03:14.720 CC lib/init/subsystem_rpc.o 00:03:14.720 CC lib/blob/request.o 00:03:14.720 CC lib/blob/zeroes.o 00:03:14.720 CC lib/nvme/nvme_opal.o 00:03:14.978 CC lib/init/rpc.o 00:03:14.979 CC lib/blob/blob_bs_dev.o 00:03:14.979 CC lib/nvme/nvme_io_msg.o 00:03:15.238 CC lib/virtio/virtio.o 00:03:15.238 CC lib/fsdev/fsdev.o 00:03:15.238 LIB libspdk_init.a 00:03:15.238 SO libspdk_init.so.6.0 00:03:15.238 SYMLINK libspdk_init.so 00:03:15.238 CC lib/virtio/virtio_vhost_user.o 00:03:15.497 CC lib/virtio/virtio_vfio_user.o 00:03:15.497 CC lib/nvme/nvme_poll_group.o 00:03:15.497 CC lib/accel/accel_rpc.o 00:03:15.497 CC lib/accel/accel_sw.o 00:03:15.756 CC lib/virtio/virtio_pci.o 00:03:15.756 CC lib/event/app.o 00:03:15.756 CC lib/fsdev/fsdev_io.o 00:03:15.756 CC lib/fsdev/fsdev_rpc.o 00:03:15.756 CC lib/nvme/nvme_zns.o 00:03:15.756 CC lib/nvme/nvme_stubs.o 00:03:15.756 CC lib/nvme/nvme_auth.o 00:03:16.015 CC lib/nvme/nvme_cuse.o 00:03:16.015 LIB libspdk_virtio.a 00:03:16.015 SO libspdk_virtio.so.7.0 00:03:16.015 LIB libspdk_accel.a 00:03:16.015 CC lib/event/reactor.o 00:03:16.274 LIB libspdk_fsdev.a 00:03:16.274 SYMLINK libspdk_virtio.so 00:03:16.274 CC lib/nvme/nvme_rdma.o 00:03:16.274 SO libspdk_accel.so.16.0 00:03:16.274 SO libspdk_fsdev.so.2.0 00:03:16.274 CC lib/event/log_rpc.o 00:03:16.274 
SYMLINK libspdk_accel.so 00:03:16.274 CC lib/event/app_rpc.o 00:03:16.274 SYMLINK libspdk_fsdev.so 00:03:16.562 CC lib/event/scheduler_static.o 00:03:16.562 CC lib/bdev/bdev.o 00:03:16.562 CC lib/bdev/bdev_rpc.o 00:03:16.562 CC lib/bdev/bdev_zone.o 00:03:16.563 CC lib/bdev/part.o 00:03:16.563 LIB libspdk_event.a 00:03:16.563 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:16.858 SO libspdk_event.so.14.0 00:03:16.858 CC lib/bdev/scsi_nvme.o 00:03:16.858 SYMLINK libspdk_event.so 00:03:17.425 LIB libspdk_fuse_dispatcher.a 00:03:17.425 SO libspdk_fuse_dispatcher.so.1.0 00:03:17.425 SYMLINK libspdk_fuse_dispatcher.so 00:03:17.425 LIB libspdk_nvme.a 00:03:17.683 LIB libspdk_blob.a 00:03:17.683 SO libspdk_nvme.so.15.0 00:03:17.683 SO libspdk_blob.so.11.0 00:03:17.941 SYMLINK libspdk_blob.so 00:03:17.941 SYMLINK libspdk_nvme.so 00:03:18.200 CC lib/lvol/lvol.o 00:03:18.200 CC lib/blobfs/tree.o 00:03:18.200 CC lib/blobfs/blobfs.o 00:03:19.136 LIB libspdk_blobfs.a 00:03:19.136 SO libspdk_blobfs.so.10.0 00:03:19.136 LIB libspdk_lvol.a 00:03:19.136 SYMLINK libspdk_blobfs.so 00:03:19.136 SO libspdk_lvol.so.10.0 00:03:19.136 SYMLINK libspdk_lvol.so 00:03:19.136 LIB libspdk_bdev.a 00:03:19.410 SO libspdk_bdev.so.17.0 00:03:19.410 SYMLINK libspdk_bdev.so 00:03:19.669 CC lib/ftl/ftl_core.o 00:03:19.669 CC lib/ftl/ftl_init.o 00:03:19.669 CC lib/ftl/ftl_layout.o 00:03:19.669 CC lib/ftl/ftl_debug.o 00:03:19.669 CC lib/ftl/ftl_io.o 00:03:19.669 CC lib/ftl/ftl_sb.o 00:03:19.669 CC lib/nvmf/ctrlr.o 00:03:19.669 CC lib/nbd/nbd.o 00:03:19.669 CC lib/scsi/dev.o 00:03:19.669 CC lib/ublk/ublk.o 00:03:19.928 CC lib/nvmf/ctrlr_discovery.o 00:03:19.928 CC lib/nvmf/ctrlr_bdev.o 00:03:19.928 CC lib/scsi/lun.o 00:03:19.928 CC lib/scsi/port.o 00:03:19.928 CC lib/scsi/scsi.o 00:03:19.928 CC lib/nvmf/subsystem.o 00:03:19.928 CC lib/ftl/ftl_l2p.o 00:03:20.187 CC lib/nbd/nbd_rpc.o 00:03:20.187 CC lib/nvmf/nvmf.o 00:03:20.187 CC lib/nvmf/nvmf_rpc.o 00:03:20.187 CC lib/scsi/scsi_bdev.o 00:03:20.187 LIB libspdk_nbd.a 00:03:20.187 CC lib/ftl/ftl_l2p_flat.o 00:03:20.187 SO libspdk_nbd.so.7.0 00:03:20.446 CC lib/ublk/ublk_rpc.o 00:03:20.446 SYMLINK libspdk_nbd.so 00:03:20.446 CC lib/scsi/scsi_pr.o 00:03:20.446 CC lib/scsi/scsi_rpc.o 00:03:20.446 LIB libspdk_ublk.a 00:03:20.446 CC lib/ftl/ftl_nv_cache.o 00:03:20.446 CC lib/scsi/task.o 00:03:20.446 SO libspdk_ublk.so.3.0 00:03:20.705 SYMLINK libspdk_ublk.so 00:03:20.705 CC lib/nvmf/transport.o 00:03:20.705 CC lib/nvmf/tcp.o 00:03:20.705 CC lib/nvmf/stubs.o 00:03:20.705 CC lib/ftl/ftl_band.o 00:03:20.964 LIB libspdk_scsi.a 00:03:20.964 CC lib/nvmf/mdns_server.o 00:03:20.964 CC lib/nvmf/rdma.o 00:03:20.964 SO libspdk_scsi.so.9.0 00:03:21.223 SYMLINK libspdk_scsi.so 00:03:21.223 CC lib/ftl/ftl_band_ops.o 00:03:21.223 CC lib/nvmf/auth.o 00:03:21.223 CC lib/ftl/ftl_writer.o 00:03:21.481 CC lib/iscsi/conn.o 00:03:21.481 CC lib/iscsi/init_grp.o 00:03:21.481 CC lib/iscsi/iscsi.o 00:03:21.481 CC lib/ftl/ftl_rq.o 00:03:21.481 CC lib/ftl/ftl_reloc.o 00:03:21.481 CC lib/ftl/ftl_l2p_cache.o 00:03:21.740 CC lib/vhost/vhost.o 00:03:21.740 CC lib/vhost/vhost_rpc.o 00:03:21.740 CC lib/iscsi/param.o 00:03:21.998 CC lib/ftl/ftl_p2l.o 00:03:21.998 CC lib/ftl/ftl_p2l_log.o 00:03:21.998 CC lib/iscsi/portal_grp.o 00:03:22.257 CC lib/vhost/vhost_scsi.o 00:03:22.257 CC lib/ftl/mngt/ftl_mngt.o 00:03:22.257 CC lib/vhost/vhost_blk.o 00:03:22.257 CC lib/iscsi/tgt_node.o 00:03:22.515 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:22.515 CC lib/vhost/rte_vhost_user.o 00:03:22.515 CC lib/iscsi/iscsi_subsystem.o 00:03:22.515 CC 
lib/iscsi/iscsi_rpc.o 00:03:22.515 CC lib/iscsi/task.o 00:03:22.774 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:22.774 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:22.774 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:22.774 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.042 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.042 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:23.042 LIB libspdk_iscsi.a 00:03:23.042 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:23.042 SO libspdk_iscsi.so.8.0 00:03:23.042 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:23.320 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:23.320 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:23.320 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:23.320 CC lib/ftl/utils/ftl_conf.o 00:03:23.320 LIB libspdk_nvmf.a 00:03:23.320 SYMLINK libspdk_iscsi.so 00:03:23.320 CC lib/ftl/utils/ftl_md.o 00:03:23.320 CC lib/ftl/utils/ftl_mempool.o 00:03:23.320 SO libspdk_nvmf.so.20.0 00:03:23.320 CC lib/ftl/utils/ftl_bitmap.o 00:03:23.579 CC lib/ftl/utils/ftl_property.o 00:03:23.579 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:23.579 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.579 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:23.579 LIB libspdk_vhost.a 00:03:23.579 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:23.579 SYMLINK libspdk_nvmf.so 00:03:23.579 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:23.579 SO libspdk_vhost.so.8.0 00:03:23.579 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:23.579 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:23.579 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:23.579 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:23.838 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.838 SYMLINK libspdk_vhost.so 00:03:23.838 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.838 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:23.838 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:23.838 CC lib/ftl/base/ftl_base_dev.o 00:03:23.838 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.838 CC lib/ftl/ftl_trace.o 00:03:24.097 LIB libspdk_ftl.a 00:03:24.355 SO libspdk_ftl.so.9.0 00:03:24.922 SYMLINK libspdk_ftl.so 00:03:25.180 CC module/env_dpdk/env_dpdk_rpc.o 00:03:25.180 CC module/blob/bdev/blob_bdev.o 00:03:25.180 CC module/sock/posix/posix.o 00:03:25.180 CC module/keyring/file/keyring.o 00:03:25.181 CC module/accel/iaa/accel_iaa.o 00:03:25.181 CC module/accel/ioat/accel_ioat.o 00:03:25.181 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:25.181 CC module/accel/error/accel_error.o 00:03:25.181 CC module/accel/dsa/accel_dsa.o 00:03:25.181 CC module/fsdev/aio/fsdev_aio.o 00:03:25.181 LIB libspdk_env_dpdk_rpc.a 00:03:25.439 SO libspdk_env_dpdk_rpc.so.6.0 00:03:25.440 SYMLINK libspdk_env_dpdk_rpc.so 00:03:25.440 CC module/accel/ioat/accel_ioat_rpc.o 00:03:25.440 CC module/keyring/file/keyring_rpc.o 00:03:25.440 CC module/accel/error/accel_error_rpc.o 00:03:25.440 LIB libspdk_scheduler_dynamic.a 00:03:25.440 CC module/accel/iaa/accel_iaa_rpc.o 00:03:25.440 SO libspdk_scheduler_dynamic.so.4.0 00:03:25.440 LIB libspdk_blob_bdev.a 00:03:25.440 LIB libspdk_accel_ioat.a 00:03:25.440 SO libspdk_blob_bdev.so.11.0 00:03:25.699 SYMLINK libspdk_scheduler_dynamic.so 00:03:25.699 SO libspdk_accel_ioat.so.6.0 00:03:25.699 LIB libspdk_keyring_file.a 00:03:25.699 LIB libspdk_accel_error.a 00:03:25.699 SYMLINK libspdk_blob_bdev.so 00:03:25.699 SO libspdk_keyring_file.so.2.0 00:03:25.699 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:25.699 SO libspdk_accel_error.so.2.0 00:03:25.699 LIB libspdk_accel_iaa.a 00:03:25.699 SYMLINK libspdk_accel_ioat.so 00:03:25.699 SO libspdk_accel_iaa.so.3.0 00:03:25.699 CC module/sock/uring/uring.o 00:03:25.699 CC module/fsdev/aio/linux_aio_mgr.o 00:03:25.699 
CC module/accel/dsa/accel_dsa_rpc.o 00:03:25.699 SYMLINK libspdk_accel_error.so 00:03:25.699 SYMLINK libspdk_keyring_file.so 00:03:25.699 SYMLINK libspdk_accel_iaa.so 00:03:25.699 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:25.958 LIB libspdk_accel_dsa.a 00:03:25.958 SO libspdk_accel_dsa.so.5.0 00:03:25.958 CC module/keyring/linux/keyring.o 00:03:25.958 LIB libspdk_scheduler_dpdk_governor.a 00:03:25.958 CC module/scheduler/gscheduler/gscheduler.o 00:03:25.958 CC module/keyring/linux/keyring_rpc.o 00:03:25.958 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:25.958 SYMLINK libspdk_accel_dsa.so 00:03:25.958 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:26.217 LIB libspdk_fsdev_aio.a 00:03:26.217 CC module/bdev/delay/vbdev_delay.o 00:03:26.217 LIB libspdk_keyring_linux.a 00:03:26.217 CC module/blobfs/bdev/blobfs_bdev.o 00:03:26.217 SO libspdk_fsdev_aio.so.1.0 00:03:26.217 SO libspdk_keyring_linux.so.1.0 00:03:26.217 LIB libspdk_sock_posix.a 00:03:26.217 CC module/bdev/error/vbdev_error.o 00:03:26.217 LIB libspdk_scheduler_gscheduler.a 00:03:26.217 SYMLINK libspdk_keyring_linux.so 00:03:26.217 SO libspdk_sock_posix.so.6.0 00:03:26.217 SYMLINK libspdk_fsdev_aio.so 00:03:26.217 CC module/bdev/gpt/gpt.o 00:03:26.217 SO libspdk_scheduler_gscheduler.so.4.0 00:03:26.217 CC module/bdev/lvol/vbdev_lvol.o 00:03:26.475 SYMLINK libspdk_scheduler_gscheduler.so 00:03:26.475 SYMLINK libspdk_sock_posix.so 00:03:26.475 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:26.475 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:26.475 LIB libspdk_sock_uring.a 00:03:26.475 SO libspdk_sock_uring.so.5.0 00:03:26.475 CC module/bdev/malloc/bdev_malloc.o 00:03:26.475 CC module/bdev/null/bdev_null.o 00:03:26.475 CC module/bdev/gpt/vbdev_gpt.o 00:03:26.475 SYMLINK libspdk_sock_uring.so 00:03:26.475 CC module/bdev/error/vbdev_error_rpc.o 00:03:26.475 CC module/bdev/nvme/bdev_nvme.o 00:03:26.475 LIB libspdk_blobfs_bdev.a 00:03:26.734 SO libspdk_blobfs_bdev.so.6.0 00:03:26.734 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:26.734 CC module/bdev/passthru/vbdev_passthru.o 00:03:26.734 LIB libspdk_bdev_error.a 00:03:26.734 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:26.734 SYMLINK libspdk_blobfs_bdev.so 00:03:26.734 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:26.734 CC module/bdev/null/bdev_null_rpc.o 00:03:26.734 LIB libspdk_bdev_gpt.a 00:03:26.734 SO libspdk_bdev_error.so.6.0 00:03:26.734 SO libspdk_bdev_gpt.so.6.0 00:03:27.001 SYMLINK libspdk_bdev_error.so 00:03:27.001 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:27.001 LIB libspdk_bdev_lvol.a 00:03:27.001 CC module/bdev/nvme/nvme_rpc.o 00:03:27.001 SYMLINK libspdk_bdev_gpt.so 00:03:27.001 SO libspdk_bdev_lvol.so.6.0 00:03:27.001 LIB libspdk_bdev_malloc.a 00:03:27.001 LIB libspdk_bdev_delay.a 00:03:27.001 LIB libspdk_bdev_null.a 00:03:27.001 SO libspdk_bdev_malloc.so.6.0 00:03:27.001 SO libspdk_bdev_delay.so.6.0 00:03:27.001 SYMLINK libspdk_bdev_lvol.so 00:03:27.001 SO libspdk_bdev_null.so.6.0 00:03:27.001 SYMLINK libspdk_bdev_malloc.so 00:03:27.001 LIB libspdk_bdev_passthru.a 00:03:27.001 CC module/bdev/nvme/bdev_mdns_client.o 00:03:27.001 SYMLINK libspdk_bdev_delay.so 00:03:27.001 SYMLINK libspdk_bdev_null.so 00:03:27.001 CC module/bdev/nvme/vbdev_opal.o 00:03:27.001 SO libspdk_bdev_passthru.so.6.0 00:03:27.001 CC module/bdev/raid/bdev_raid.o 00:03:27.260 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:27.260 CC module/bdev/split/vbdev_split.o 00:03:27.260 SYMLINK libspdk_bdev_passthru.so 00:03:27.260 CC module/bdev/split/vbdev_split_rpc.o 00:03:27.260 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:03:27.260 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:27.260 CC module/bdev/uring/bdev_uring.o 00:03:27.260 CC module/bdev/uring/bdev_uring_rpc.o 00:03:27.520 CC module/bdev/raid/bdev_raid_rpc.o 00:03:27.520 CC module/bdev/raid/bdev_raid_sb.o 00:03:27.520 CC module/bdev/raid/raid0.o 00:03:27.520 CC module/bdev/raid/raid1.o 00:03:27.520 CC module/bdev/raid/concat.o 00:03:27.520 LIB libspdk_bdev_split.a 00:03:27.520 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:27.520 SO libspdk_bdev_split.so.6.0 00:03:27.778 SYMLINK libspdk_bdev_split.so 00:03:27.778 LIB libspdk_bdev_zone_block.a 00:03:27.778 SO libspdk_bdev_zone_block.so.6.0 00:03:27.778 LIB libspdk_bdev_uring.a 00:03:27.778 SO libspdk_bdev_uring.so.6.0 00:03:27.778 CC module/bdev/aio/bdev_aio.o 00:03:27.778 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.040 SYMLINK libspdk_bdev_zone_block.so 00:03:28.040 CC module/bdev/ftl/bdev_ftl.o 00:03:28.040 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.040 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.040 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.040 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.040 SYMLINK libspdk_bdev_uring.so 00:03:28.040 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.299 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.299 LIB libspdk_bdev_raid.a 00:03:28.299 LIB libspdk_bdev_ftl.a 00:03:28.299 SO libspdk_bdev_raid.so.6.0 00:03:28.299 LIB libspdk_bdev_aio.a 00:03:28.299 SO libspdk_bdev_ftl.so.6.0 00:03:28.299 SO libspdk_bdev_aio.so.6.0 00:03:28.299 LIB libspdk_bdev_iscsi.a 00:03:28.299 SYMLINK libspdk_bdev_ftl.so 00:03:28.299 SYMLINK libspdk_bdev_raid.so 00:03:28.299 SO libspdk_bdev_iscsi.so.6.0 00:03:28.299 SYMLINK libspdk_bdev_aio.so 00:03:28.299 SYMLINK libspdk_bdev_iscsi.so 00:03:28.558 LIB libspdk_bdev_virtio.a 00:03:28.558 SO libspdk_bdev_virtio.so.6.0 00:03:28.558 SYMLINK libspdk_bdev_virtio.so 00:03:29.126 LIB libspdk_bdev_nvme.a 00:03:29.385 SO libspdk_bdev_nvme.so.7.1 00:03:29.385 SYMLINK libspdk_bdev_nvme.so 00:03:29.954 CC module/event/subsystems/vmd/vmd.o 00:03:29.954 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:29.954 CC module/event/subsystems/scheduler/scheduler.o 00:03:29.954 CC module/event/subsystems/keyring/keyring.o 00:03:29.954 CC module/event/subsystems/iobuf/iobuf.o 00:03:29.954 CC module/event/subsystems/fsdev/fsdev.o 00:03:29.954 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:29.954 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:29.954 CC module/event/subsystems/sock/sock.o 00:03:29.954 LIB libspdk_event_keyring.a 00:03:29.954 LIB libspdk_event_scheduler.a 00:03:29.954 SO libspdk_event_keyring.so.1.0 00:03:29.954 LIB libspdk_event_iobuf.a 00:03:29.954 LIB libspdk_event_vmd.a 00:03:29.954 SO libspdk_event_scheduler.so.4.0 00:03:29.954 SO libspdk_event_iobuf.so.3.0 00:03:29.954 SO libspdk_event_vmd.so.6.0 00:03:29.954 SYMLINK libspdk_event_keyring.so 00:03:30.213 LIB libspdk_event_fsdev.a 00:03:30.213 LIB libspdk_event_vhost_blk.a 00:03:30.213 SYMLINK libspdk_event_scheduler.so 00:03:30.213 LIB libspdk_event_sock.a 00:03:30.213 SYMLINK libspdk_event_vmd.so 00:03:30.213 SO libspdk_event_fsdev.so.1.0 00:03:30.213 SO libspdk_event_sock.so.5.0 00:03:30.213 SO libspdk_event_vhost_blk.so.3.0 00:03:30.213 SYMLINK libspdk_event_iobuf.so 00:03:30.213 SYMLINK libspdk_event_vhost_blk.so 00:03:30.213 SYMLINK libspdk_event_sock.so 00:03:30.213 SYMLINK libspdk_event_fsdev.so 00:03:30.471 CC module/event/subsystems/accel/accel.o 00:03:30.471 LIB libspdk_event_accel.a 00:03:30.471 SO 
libspdk_event_accel.so.6.0 00:03:30.730 SYMLINK libspdk_event_accel.so 00:03:30.987 CC module/event/subsystems/bdev/bdev.o 00:03:31.245 LIB libspdk_event_bdev.a 00:03:31.245 SO libspdk_event_bdev.so.6.0 00:03:31.245 SYMLINK libspdk_event_bdev.so 00:03:31.503 CC module/event/subsystems/ublk/ublk.o 00:03:31.503 CC module/event/subsystems/scsi/scsi.o 00:03:31.503 CC module/event/subsystems/nbd/nbd.o 00:03:31.503 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:31.503 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:31.503 LIB libspdk_event_nbd.a 00:03:31.761 LIB libspdk_event_scsi.a 00:03:31.761 SO libspdk_event_nbd.so.6.0 00:03:31.761 SO libspdk_event_scsi.so.6.0 00:03:31.761 LIB libspdk_event_ublk.a 00:03:31.761 SYMLINK libspdk_event_scsi.so 00:03:31.761 SYMLINK libspdk_event_nbd.so 00:03:31.761 SO libspdk_event_ublk.so.3.0 00:03:31.761 LIB libspdk_event_nvmf.a 00:03:31.761 SO libspdk_event_nvmf.so.6.0 00:03:31.761 SYMLINK libspdk_event_ublk.so 00:03:31.761 SYMLINK libspdk_event_nvmf.so 00:03:32.020 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:32.020 CC module/event/subsystems/iscsi/iscsi.o 00:03:32.020 LIB libspdk_event_vhost_scsi.a 00:03:32.279 SO libspdk_event_vhost_scsi.so.3.0 00:03:32.279 LIB libspdk_event_iscsi.a 00:03:32.279 SO libspdk_event_iscsi.so.6.0 00:03:32.279 SYMLINK libspdk_event_vhost_scsi.so 00:03:32.279 SYMLINK libspdk_event_iscsi.so 00:03:32.536 SO libspdk.so.6.0 00:03:32.536 SYMLINK libspdk.so 00:03:32.794 CC test/rpc_client/rpc_client_test.o 00:03:32.794 TEST_HEADER include/spdk/accel.h 00:03:32.794 TEST_HEADER include/spdk/accel_module.h 00:03:32.794 CXX app/trace/trace.o 00:03:32.794 TEST_HEADER include/spdk/assert.h 00:03:32.794 TEST_HEADER include/spdk/barrier.h 00:03:32.794 TEST_HEADER include/spdk/base64.h 00:03:32.794 TEST_HEADER include/spdk/bdev.h 00:03:32.794 TEST_HEADER include/spdk/bdev_module.h 00:03:32.794 TEST_HEADER include/spdk/bdev_zone.h 00:03:32.794 TEST_HEADER include/spdk/bit_array.h 00:03:32.794 TEST_HEADER include/spdk/bit_pool.h 00:03:32.794 TEST_HEADER include/spdk/blob_bdev.h 00:03:32.794 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:32.794 TEST_HEADER include/spdk/blobfs.h 00:03:32.794 TEST_HEADER include/spdk/blob.h 00:03:32.794 TEST_HEADER include/spdk/conf.h 00:03:32.794 TEST_HEADER include/spdk/config.h 00:03:32.794 TEST_HEADER include/spdk/cpuset.h 00:03:32.794 TEST_HEADER include/spdk/crc16.h 00:03:32.794 TEST_HEADER include/spdk/crc32.h 00:03:32.794 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.794 TEST_HEADER include/spdk/crc64.h 00:03:32.794 TEST_HEADER include/spdk/dif.h 00:03:32.794 TEST_HEADER include/spdk/dma.h 00:03:32.794 TEST_HEADER include/spdk/endian.h 00:03:32.794 TEST_HEADER include/spdk/env_dpdk.h 00:03:32.794 TEST_HEADER include/spdk/env.h 00:03:32.794 TEST_HEADER include/spdk/event.h 00:03:32.794 TEST_HEADER include/spdk/fd_group.h 00:03:32.794 TEST_HEADER include/spdk/fd.h 00:03:32.794 TEST_HEADER include/spdk/file.h 00:03:32.794 TEST_HEADER include/spdk/fsdev.h 00:03:32.794 TEST_HEADER include/spdk/fsdev_module.h 00:03:32.794 TEST_HEADER include/spdk/ftl.h 00:03:32.794 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:32.794 TEST_HEADER include/spdk/gpt_spec.h 00:03:32.794 TEST_HEADER include/spdk/hexlify.h 00:03:32.794 TEST_HEADER include/spdk/histogram_data.h 00:03:32.794 TEST_HEADER include/spdk/idxd.h 00:03:32.794 CC test/thread/poller_perf/poller_perf.o 00:03:32.794 TEST_HEADER include/spdk/idxd_spec.h 00:03:32.794 TEST_HEADER include/spdk/init.h 00:03:32.794 TEST_HEADER include/spdk/ioat.h 
00:03:32.794 TEST_HEADER include/spdk/ioat_spec.h 00:03:32.794 TEST_HEADER include/spdk/iscsi_spec.h 00:03:32.794 TEST_HEADER include/spdk/json.h 00:03:32.794 TEST_HEADER include/spdk/jsonrpc.h 00:03:32.794 TEST_HEADER include/spdk/keyring.h 00:03:32.794 TEST_HEADER include/spdk/keyring_module.h 00:03:32.794 CC examples/util/zipf/zipf.o 00:03:32.794 TEST_HEADER include/spdk/likely.h 00:03:32.794 TEST_HEADER include/spdk/log.h 00:03:32.794 TEST_HEADER include/spdk/lvol.h 00:03:32.794 TEST_HEADER include/spdk/md5.h 00:03:32.794 CC examples/ioat/perf/perf.o 00:03:32.794 TEST_HEADER include/spdk/memory.h 00:03:32.794 TEST_HEADER include/spdk/mmio.h 00:03:32.794 TEST_HEADER include/spdk/nbd.h 00:03:32.794 TEST_HEADER include/spdk/net.h 00:03:32.794 TEST_HEADER include/spdk/notify.h 00:03:32.794 TEST_HEADER include/spdk/nvme.h 00:03:32.794 TEST_HEADER include/spdk/nvme_intel.h 00:03:32.794 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:32.794 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:32.794 TEST_HEADER include/spdk/nvme_spec.h 00:03:32.794 TEST_HEADER include/spdk/nvme_zns.h 00:03:32.794 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:32.794 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:32.794 TEST_HEADER include/spdk/nvmf.h 00:03:32.794 CC test/app/bdev_svc/bdev_svc.o 00:03:32.794 TEST_HEADER include/spdk/nvmf_spec.h 00:03:32.794 CC test/dma/test_dma/test_dma.o 00:03:32.794 TEST_HEADER include/spdk/nvmf_transport.h 00:03:32.795 TEST_HEADER include/spdk/opal.h 00:03:32.795 TEST_HEADER include/spdk/opal_spec.h 00:03:32.795 TEST_HEADER include/spdk/pci_ids.h 00:03:32.795 TEST_HEADER include/spdk/pipe.h 00:03:32.795 TEST_HEADER include/spdk/queue.h 00:03:32.795 TEST_HEADER include/spdk/reduce.h 00:03:32.795 TEST_HEADER include/spdk/rpc.h 00:03:32.795 TEST_HEADER include/spdk/scheduler.h 00:03:32.795 TEST_HEADER include/spdk/scsi.h 00:03:32.795 TEST_HEADER include/spdk/scsi_spec.h 00:03:32.795 TEST_HEADER include/spdk/sock.h 00:03:32.795 TEST_HEADER include/spdk/stdinc.h 00:03:32.795 TEST_HEADER include/spdk/string.h 00:03:32.795 TEST_HEADER include/spdk/thread.h 00:03:32.795 TEST_HEADER include/spdk/trace.h 00:03:32.795 TEST_HEADER include/spdk/trace_parser.h 00:03:32.795 TEST_HEADER include/spdk/tree.h 00:03:32.795 TEST_HEADER include/spdk/ublk.h 00:03:32.795 TEST_HEADER include/spdk/util.h 00:03:32.795 TEST_HEADER include/spdk/uuid.h 00:03:32.795 TEST_HEADER include/spdk/version.h 00:03:32.795 CC test/env/mem_callbacks/mem_callbacks.o 00:03:32.795 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:32.795 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:32.795 TEST_HEADER include/spdk/vhost.h 00:03:32.795 TEST_HEADER include/spdk/vmd.h 00:03:32.795 TEST_HEADER include/spdk/xor.h 00:03:32.795 TEST_HEADER include/spdk/zipf.h 00:03:32.795 CXX test/cpp_headers/accel.o 00:03:32.795 LINK interrupt_tgt 00:03:32.795 LINK rpc_client_test 00:03:33.052 LINK poller_perf 00:03:33.052 LINK zipf 00:03:33.052 LINK bdev_svc 00:03:33.052 LINK ioat_perf 00:03:33.052 CXX test/cpp_headers/accel_module.o 00:03:33.052 LINK spdk_trace 00:03:33.052 CC test/env/vtophys/vtophys.o 00:03:33.311 CC test/app/histogram_perf/histogram_perf.o 00:03:33.311 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.311 CXX test/cpp_headers/assert.o 00:03:33.311 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:33.311 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:33.311 CC examples/ioat/verify/verify.o 00:03:33.311 LINK vtophys 00:03:33.311 LINK test_dma 00:03:33.311 LINK histogram_perf 00:03:33.311 CC app/trace_record/trace_record.o 
00:03:33.311 LINK env_dpdk_post_init 00:03:33.569 CXX test/cpp_headers/barrier.o 00:03:33.569 LINK mem_callbacks 00:03:33.569 LINK verify 00:03:33.569 CC test/env/memory/memory_ut.o 00:03:33.569 CXX test/cpp_headers/base64.o 00:03:33.569 CC test/env/pci/pci_ut.o 00:03:33.569 CXX test/cpp_headers/bdev.o 00:03:33.828 LINK spdk_trace_record 00:03:33.828 LINK nvme_fuzz 00:03:33.828 CC app/nvmf_tgt/nvmf_main.o 00:03:33.828 CC test/app/jsoncat/jsoncat.o 00:03:33.828 LINK jsoncat 00:03:33.828 CXX test/cpp_headers/bdev_module.o 00:03:33.828 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.828 LINK nvmf_tgt 00:03:33.828 CC examples/thread/thread/thread_ex.o 00:03:34.087 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.087 LINK pci_ut 00:03:34.087 CC app/spdk_tgt/spdk_tgt.o 00:03:34.087 CXX test/cpp_headers/bdev_zone.o 00:03:34.087 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.087 LINK iscsi_tgt 00:03:34.087 LINK thread 00:03:34.087 CC app/spdk_lspci/spdk_lspci.o 00:03:34.345 CC examples/sock/hello_world/hello_sock.o 00:03:34.345 CXX test/cpp_headers/bit_array.o 00:03:34.345 LINK spdk_tgt 00:03:34.345 LINK spdk_lspci 00:03:34.345 CXX test/cpp_headers/bit_pool.o 00:03:34.345 CC app/spdk_nvme_perf/perf.o 00:03:34.603 LINK hello_sock 00:03:34.603 CC app/spdk_nvme_identify/identify.o 00:03:34.603 CC test/event/event_perf/event_perf.o 00:03:34.603 LINK vhost_fuzz 00:03:34.603 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.603 CC test/app/stub/stub.o 00:03:34.603 CXX test/cpp_headers/blob_bdev.o 00:03:34.603 LINK event_perf 00:03:34.861 LINK memory_ut 00:03:34.861 LINK spdk_nvme_discover 00:03:34.861 LINK stub 00:03:34.861 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.861 CC examples/vmd/led/led.o 00:03:34.861 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.861 LINK lsvmd 00:03:34.861 CC test/event/reactor/reactor.o 00:03:34.861 LINK iscsi_fuzz 00:03:34.861 LINK led 00:03:34.861 CXX test/cpp_headers/blobfs.o 00:03:35.120 CC app/spdk_top/spdk_top.o 00:03:35.120 CC test/event/reactor_perf/reactor_perf.o 00:03:35.120 LINK reactor 00:03:35.120 CXX test/cpp_headers/blob.o 00:03:35.120 LINK reactor_perf 00:03:35.378 CC examples/idxd/perf/perf.o 00:03:35.378 CC app/vhost/vhost.o 00:03:35.378 LINK spdk_nvme_identify 00:03:35.378 CC test/nvme/aer/aer.o 00:03:35.378 CXX test/cpp_headers/conf.o 00:03:35.378 CC test/event/app_repeat/app_repeat.o 00:03:35.378 CC test/accel/dif/dif.o 00:03:35.378 LINK spdk_nvme_perf 00:03:35.378 LINK vhost 00:03:35.637 CC test/event/scheduler/scheduler.o 00:03:35.637 CXX test/cpp_headers/config.o 00:03:35.637 LINK app_repeat 00:03:35.637 CXX test/cpp_headers/cpuset.o 00:03:35.637 LINK idxd_perf 00:03:35.637 CC app/spdk_dd/spdk_dd.o 00:03:35.637 LINK aer 00:03:35.637 CXX test/cpp_headers/crc16.o 00:03:35.895 LINK scheduler 00:03:35.895 CC app/fio/nvme/fio_plugin.o 00:03:35.895 CC app/fio/bdev/fio_plugin.o 00:03:35.895 CXX test/cpp_headers/crc32.o 00:03:35.895 CC test/nvme/reset/reset.o 00:03:35.895 LINK spdk_top 00:03:35.895 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:35.895 CC test/blobfs/mkfs/mkfs.o 00:03:36.154 CXX test/cpp_headers/crc64.o 00:03:36.154 LINK spdk_dd 00:03:36.154 LINK dif 00:03:36.154 CC test/nvme/sgl/sgl.o 00:03:36.154 LINK reset 00:03:36.154 LINK mkfs 00:03:36.154 CC test/lvol/esnap/esnap.o 00:03:36.154 CXX test/cpp_headers/dif.o 00:03:36.154 LINK hello_fsdev 00:03:36.412 CXX test/cpp_headers/dma.o 00:03:36.412 LINK spdk_nvme 00:03:36.412 CXX test/cpp_headers/endian.o 00:03:36.412 LINK spdk_bdev 00:03:36.412 CXX test/cpp_headers/env_dpdk.o 00:03:36.412 CC 
test/nvme/e2edp/nvme_dp.o 00:03:36.412 LINK sgl 00:03:36.412 CXX test/cpp_headers/env.o 00:03:36.412 CXX test/cpp_headers/event.o 00:03:36.412 CXX test/cpp_headers/fd_group.o 00:03:36.671 CC examples/accel/perf/accel_perf.o 00:03:36.671 CXX test/cpp_headers/fd.o 00:03:36.671 CC examples/blob/hello_world/hello_blob.o 00:03:36.671 CC test/bdev/bdevio/bdevio.o 00:03:36.671 CXX test/cpp_headers/file.o 00:03:36.671 CC test/nvme/overhead/overhead.o 00:03:36.671 CXX test/cpp_headers/fsdev.o 00:03:36.671 LINK nvme_dp 00:03:36.671 CC examples/blob/cli/blobcli.o 00:03:36.928 CC test/nvme/err_injection/err_injection.o 00:03:36.928 CXX test/cpp_headers/fsdev_module.o 00:03:36.928 LINK hello_blob 00:03:36.928 LINK overhead 00:03:36.928 CC test/nvme/startup/startup.o 00:03:36.928 LINK bdevio 00:03:37.187 CC examples/nvme/hello_world/hello_world.o 00:03:37.187 CXX test/cpp_headers/ftl.o 00:03:37.187 LINK accel_perf 00:03:37.187 LINK err_injection 00:03:37.187 CC test/nvme/reserve/reserve.o 00:03:37.187 LINK startup 00:03:37.187 CC examples/nvme/reconnect/reconnect.o 00:03:37.187 LINK blobcli 00:03:37.445 LINK hello_world 00:03:37.445 CXX test/cpp_headers/fuse_dispatcher.o 00:03:37.445 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.445 CC test/nvme/simple_copy/simple_copy.o 00:03:37.445 LINK reserve 00:03:37.445 CC examples/nvme/arbitration/arbitration.o 00:03:37.445 CXX test/cpp_headers/gpt_spec.o 00:03:37.445 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.702 CC test/nvme/connect_stress/connect_stress.o 00:03:37.702 LINK simple_copy 00:03:37.702 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.702 LINK reconnect 00:03:37.702 CC examples/nvme/hotplug/hotplug.o 00:03:37.702 CXX test/cpp_headers/hexlify.o 00:03:37.702 LINK connect_stress 00:03:37.702 LINK hello_bdev 00:03:37.702 LINK arbitration 00:03:37.702 LINK nvme_manage 00:03:37.702 CC test/nvme/boot_partition/boot_partition.o 00:03:37.960 CXX test/cpp_headers/histogram_data.o 00:03:37.960 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.960 LINK hotplug 00:03:37.960 CXX test/cpp_headers/idxd.o 00:03:37.960 CC test/nvme/compliance/nvme_compliance.o 00:03:37.960 LINK boot_partition 00:03:37.960 CXX test/cpp_headers/idxd_spec.o 00:03:37.960 CC examples/nvme/abort/abort.o 00:03:37.960 CC test/nvme/fused_ordering/fused_ordering.o 00:03:38.218 LINK cmb_copy 00:03:38.218 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.218 CXX test/cpp_headers/init.o 00:03:38.218 CXX test/cpp_headers/ioat.o 00:03:38.218 LINK fused_ordering 00:03:38.218 LINK pmr_persistence 00:03:38.218 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:38.218 LINK nvme_compliance 00:03:38.218 CC test/nvme/fdp/fdp.o 00:03:38.520 CXX test/cpp_headers/ioat_spec.o 00:03:38.520 LINK bdevperf 00:03:38.520 CXX test/cpp_headers/iscsi_spec.o 00:03:38.520 LINK abort 00:03:38.520 CXX test/cpp_headers/json.o 00:03:38.520 CXX test/cpp_headers/jsonrpc.o 00:03:38.520 LINK doorbell_aers 00:03:38.520 CXX test/cpp_headers/keyring.o 00:03:38.520 CC test/nvme/cuse/cuse.o 00:03:38.520 CXX test/cpp_headers/keyring_module.o 00:03:38.520 CXX test/cpp_headers/likely.o 00:03:38.790 CXX test/cpp_headers/log.o 00:03:38.790 CXX test/cpp_headers/lvol.o 00:03:38.790 LINK fdp 00:03:38.790 CXX test/cpp_headers/md5.o 00:03:38.790 CXX test/cpp_headers/memory.o 00:03:38.790 CXX test/cpp_headers/mmio.o 00:03:38.790 CXX test/cpp_headers/nbd.o 00:03:38.790 CXX test/cpp_headers/net.o 00:03:38.790 CXX test/cpp_headers/notify.o 00:03:38.790 CXX test/cpp_headers/nvme.o 00:03:38.790 CC examples/nvmf/nvmf/nvmf.o 
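Interleaved with the example and functional-test binaries in this stretch of the make output, the long run of `CXX test/cpp_headers/*.o` entries (continuing in the lines below) compiles one object per public SPDK header. The sketch below is an illustrative reconstruction of that kind of check, inferred only from the object names in the log; it is not the actual SPDK makefile rule, and the paths and compiler flags are assumptions.

```bash
# Hypothetical reconstruction of the header self-containedness check suggested by
# the CXX test/cpp_headers/*.o entries: each public spdk/*.h header is compiled in
# its own C++ translation unit, so a header that cannot be included on its own
# (or is not C++-safe) fails at build time instead of in a consumer application.
cd /home/vagrant/spdk_repo/spdk          # repo path as it appears in this log
mkdir -p /tmp/cpp_headers
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    # One tiny .cpp per header, containing nothing but the include under test.
    printf '#include "spdk/%s.h"\n' "$name" > "/tmp/cpp_headers/${name}.cpp"
    # Any compile failure here indicates the header is not self-contained.
    g++ -std=c++11 -Iinclude -c "/tmp/cpp_headers/${name}.cpp" \
        -o "/tmp/cpp_headers/${name}.o"
done
```

The value of such a check, presumably, is that a header which silently relies on another include being pulled in first is caught by the library's own build rather than by downstream users.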
00:03:38.790 CXX test/cpp_headers/nvme_intel.o 00:03:38.790 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.048 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.048 CXX test/cpp_headers/nvme_spec.o 00:03:39.048 CXX test/cpp_headers/nvme_zns.o 00:03:39.048 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.048 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.048 CXX test/cpp_headers/nvmf.o 00:03:39.048 CXX test/cpp_headers/nvmf_spec.o 00:03:39.048 CXX test/cpp_headers/nvmf_transport.o 00:03:39.048 LINK nvmf 00:03:39.307 CXX test/cpp_headers/opal.o 00:03:39.307 CXX test/cpp_headers/opal_spec.o 00:03:39.307 CXX test/cpp_headers/pci_ids.o 00:03:39.307 CXX test/cpp_headers/pipe.o 00:03:39.307 CXX test/cpp_headers/queue.o 00:03:39.307 CXX test/cpp_headers/reduce.o 00:03:39.307 CXX test/cpp_headers/rpc.o 00:03:39.307 CXX test/cpp_headers/scheduler.o 00:03:39.307 CXX test/cpp_headers/scsi.o 00:03:39.307 CXX test/cpp_headers/scsi_spec.o 00:03:39.307 CXX test/cpp_headers/sock.o 00:03:39.307 CXX test/cpp_headers/stdinc.o 00:03:39.307 CXX test/cpp_headers/string.o 00:03:39.307 CXX test/cpp_headers/thread.o 00:03:39.566 CXX test/cpp_headers/trace.o 00:03:39.566 CXX test/cpp_headers/trace_parser.o 00:03:39.566 CXX test/cpp_headers/tree.o 00:03:39.566 CXX test/cpp_headers/ublk.o 00:03:39.566 CXX test/cpp_headers/util.o 00:03:39.566 CXX test/cpp_headers/uuid.o 00:03:39.566 CXX test/cpp_headers/version.o 00:03:39.566 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.566 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.566 CXX test/cpp_headers/vhost.o 00:03:39.566 CXX test/cpp_headers/vmd.o 00:03:39.824 CXX test/cpp_headers/xor.o 00:03:39.824 CXX test/cpp_headers/zipf.o 00:03:39.824 LINK cuse 00:03:41.199 LINK esnap 00:03:41.766 00:03:41.766 real 1m32.993s 00:03:41.766 user 8m35.612s 00:03:41.766 sys 1m42.770s 00:03:41.766 09:53:13 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:41.766 09:53:13 make -- common/autotest_common.sh@10 -- $ set +x 00:03:41.766 ************************************ 00:03:41.766 END TEST make 00:03:41.766 ************************************ 00:03:41.766 09:53:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.766 09:53:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:41.766 09:53:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:41.766 09:53:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.766 09:53:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.766 09:53:13 -- pm/common@44 -- $ pid=5242 00:03:41.766 09:53:13 -- pm/common@50 -- $ kill -TERM 5242 00:03:41.766 09:53:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.766 09:53:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.766 09:53:13 -- pm/common@44 -- $ pid=5244 00:03:41.766 09:53:13 -- pm/common@50 -- $ kill -TERM 5244 00:03:41.766 09:53:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:41.766 09:53:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:41.766 09:53:13 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:41.766 09:53:13 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:41.766 09:53:13 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:41.766 09:53:13 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:41.766 09:53:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 
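At this point the build has finished (`END TEST make`, with the timing summary above), autorun.sh stops the resource monitors and hands control to autotest.sh, and autotest_common.sh probes the installed lcov version: the `lt 1.15 2` / `cmp_versions 1.15 '<' 2` entries here, together with the scripts/common.sh xtrace that continues below, are that probe. As a reading aid, a condensed bash sketch of the comparison being traced is given below. It is reconstructed from the visible trace only; the real scripts/common.sh helpers also use a `decimal` helper and lt/gt/eq counters, so treat this as illustrative rather than a copy of the SPDK script.

```bash
# Condensed sketch of the dotted-version comparison the surrounding xtrace steps
# through: split both versions on '.', '-' and ':' and compare field by field.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"

    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local f1=${ver1[v]:-0} f2=${ver2[v]:-0}
        # The first differing field decides the comparison.
        ((f1 > f2)) && { [[ $op == '>' ]]; return; }
        ((f1 < f2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]   # all fields equal
}

# Mirrors the call recorded in the trace: 1.15 (presumably the detected lcov
# version) is compared against 2, so the 1.x-style --rc coverage options get used.
lt 1.15 2 && echo "lcov is a 1.x release, keep the --rc lcov_*_coverage options"
```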
00:03:41.766 09:53:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.766 09:53:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.766 09:53:13 -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.766 09:53:13 -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.766 09:53:13 -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.766 09:53:13 -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.766 09:53:13 -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.766 09:53:13 -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.766 09:53:13 -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.766 09:53:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.766 09:53:13 -- scripts/common.sh@344 -- # case "$op" in 00:03:41.766 09:53:13 -- scripts/common.sh@345 -- # : 1 00:03:41.766 09:53:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.766 09:53:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:41.766 09:53:13 -- scripts/common.sh@365 -- # decimal 1 00:03:41.766 09:53:13 -- scripts/common.sh@353 -- # local d=1 00:03:41.766 09:53:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.766 09:53:13 -- scripts/common.sh@355 -- # echo 1 00:03:41.766 09:53:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.766 09:53:13 -- scripts/common.sh@366 -- # decimal 2 00:03:41.766 09:53:13 -- scripts/common.sh@353 -- # local d=2 00:03:41.766 09:53:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.766 09:53:13 -- scripts/common.sh@355 -- # echo 2 00:03:41.766 09:53:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.766 09:53:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.766 09:53:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.766 09:53:13 -- scripts/common.sh@368 -- # return 0 00:03:41.766 09:53:13 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.766 09:53:13 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:41.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.766 --rc genhtml_branch_coverage=1 00:03:41.766 --rc genhtml_function_coverage=1 00:03:41.766 --rc genhtml_legend=1 00:03:41.766 --rc geninfo_all_blocks=1 00:03:41.766 --rc geninfo_unexecuted_blocks=1 00:03:41.766 00:03:41.766 ' 00:03:41.766 09:53:13 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:41.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.767 --rc genhtml_branch_coverage=1 00:03:41.767 --rc genhtml_function_coverage=1 00:03:41.767 --rc genhtml_legend=1 00:03:41.767 --rc geninfo_all_blocks=1 00:03:41.767 --rc geninfo_unexecuted_blocks=1 00:03:41.767 00:03:41.767 ' 00:03:41.767 09:53:13 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:41.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.767 --rc genhtml_branch_coverage=1 00:03:41.767 --rc genhtml_function_coverage=1 00:03:41.767 --rc genhtml_legend=1 00:03:41.767 --rc geninfo_all_blocks=1 00:03:41.767 --rc geninfo_unexecuted_blocks=1 00:03:41.767 00:03:41.767 ' 00:03:41.767 09:53:13 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:41.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.767 --rc genhtml_branch_coverage=1 00:03:41.767 --rc genhtml_function_coverage=1 00:03:41.767 --rc genhtml_legend=1 00:03:41.767 --rc geninfo_all_blocks=1 00:03:41.767 --rc geninfo_unexecuted_blocks=1 00:03:41.767 00:03:41.767 ' 00:03:41.767 09:53:13 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:41.767 09:53:13 -- nvmf/common.sh@7 -- # uname -s 00:03:41.767 09:53:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.767 09:53:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.767 09:53:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.767 09:53:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.767 09:53:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.767 09:53:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.767 09:53:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.767 09:53:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.767 09:53:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.767 09:53:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.767 09:53:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:03:41.767 09:53:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:03:41.767 09:53:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.767 09:53:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.767 09:53:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:41.767 09:53:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.767 09:53:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:41.767 09:53:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:41.767 09:53:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.767 09:53:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.767 09:53:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.767 09:53:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.767 09:53:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.767 09:53:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.767 09:53:13 -- paths/export.sh@5 -- # export PATH 00:03:41.767 09:53:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.767 09:53:13 -- nvmf/common.sh@51 -- # : 0 00:03:41.767 09:53:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:41.767 09:53:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:41.767 09:53:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.767 09:53:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.767 09:53:13 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.767 09:53:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:41.767 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:41.767 09:53:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:41.767 09:53:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:41.767 09:53:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:41.767 09:53:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.767 09:53:13 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.767 09:53:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.767 09:53:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.767 09:53:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.767 09:53:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.767 09:53:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.767 09:53:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:42.026 09:53:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:42.026 09:53:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:42.026 09:53:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54364 00:03:42.026 09:53:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:42.026 09:53:13 -- pm/common@17 -- # local monitor 00:03:42.026 09:53:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.026 09:53:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:42.026 09:53:13 -- pm/common@25 -- # sleep 1 00:03:42.026 09:53:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:42.026 09:53:13 -- pm/common@21 -- # date +%s 00:03:42.026 09:53:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730713993 00:03:42.026 09:53:13 -- pm/common@21 -- # date +%s 00:03:42.026 09:53:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730713993 00:03:42.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730713993_collect-vmstat.pm.log 00:03:42.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730713993_collect-cpu-load.pm.log 00:03:42.965 09:53:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.965 09:53:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.965 09:53:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:42.965 09:53:14 -- common/autotest_common.sh@10 -- # set +x 00:03:42.965 09:53:14 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.965 09:53:14 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:42.965 09:53:14 -- common/autotest_common.sh@10 -- # set +x 00:03:42.965 09:53:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:42.965 09:53:15 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:42.965 09:53:15 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:42.965 09:53:15 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:42.965 09:53:15 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:42.965 09:53:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
00:03:42.965 09:53:15 -- common/autotest_common.sh@1455 -- # uname 00:03:42.965 09:53:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:42.965 09:53:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.965 09:53:15 -- common/autotest_common.sh@1475 -- # uname 00:03:42.965 09:53:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:42.965 09:53:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:42.965 09:53:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:42.965 lcov: LCOV version 1.15 00:03:42.965 09:53:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:01.084 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:01.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:15.983 09:53:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:15.983 09:53:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.983 09:53:47 -- common/autotest_common.sh@10 -- # set +x 00:04:15.983 09:53:47 -- spdk/autotest.sh@78 -- # rm -f 00:04:15.983 09:53:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.983 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:15.983 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:15.983 09:53:47 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:15.983 09:53:47 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:15.983 09:53:47 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:15.983 09:53:47 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:15.983 09:53:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.983 09:53:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:15.983 09:53:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:15.983 09:53:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.983 09:53:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.983 09:53:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.983 09:53:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:15.983 09:53:47 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:15.983 09:53:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:15.983 09:53:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.983 09:53:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.983 09:53:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:15.983 09:53:47 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:15.983 09:53:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:15.983 09:53:47 -- common/autotest_common.sh@1651 -- 
# [[ none != none ]] 00:04:15.983 09:53:47 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.983 09:53:47 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:15.983 09:53:47 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:15.983 09:53:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:15.983 09:53:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.983 09:53:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:15.983 09:53:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.983 09:53:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.983 09:53:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:15.983 09:53:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:15.983 09:53:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:15.983 No valid GPT data, bailing 00:04:15.983 09:53:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.983 09:53:48 -- scripts/common.sh@394 -- # pt= 00:04:15.983 09:53:48 -- scripts/common.sh@395 -- # return 1 00:04:15.983 09:53:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:15.984 1+0 records in 00:04:15.984 1+0 records out 00:04:15.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457948 s, 229 MB/s 00:04:15.984 09:53:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.984 09:53:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.984 09:53:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:15.984 09:53:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:15.984 09:53:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:15.984 No valid GPT data, bailing 00:04:15.984 09:53:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:15.984 09:53:48 -- scripts/common.sh@394 -- # pt= 00:04:15.984 09:53:48 -- scripts/common.sh@395 -- # return 1 00:04:15.984 09:53:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:15.984 1+0 records in 00:04:15.984 1+0 records out 00:04:15.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443545 s, 236 MB/s 00:04:15.984 09:53:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.984 09:53:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.984 09:53:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:15.984 09:53:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:15.984 09:53:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:16.243 No valid GPT data, bailing 00:04:16.243 09:53:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:16.243 09:53:48 -- scripts/common.sh@394 -- # pt= 00:04:16.243 09:53:48 -- scripts/common.sh@395 -- # return 1 00:04:16.243 09:53:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:16.243 1+0 records in 00:04:16.243 1+0 records out 00:04:16.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00364415 s, 288 MB/s 00:04:16.243 09:53:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.243 09:53:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.243 09:53:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:16.243 09:53:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:16.243 09:53:48 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:16.243 No valid GPT data, bailing 00:04:16.243 09:53:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:16.243 09:53:48 -- scripts/common.sh@394 -- # pt= 00:04:16.243 09:53:48 -- scripts/common.sh@395 -- # return 1 00:04:16.243 09:53:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:16.243 1+0 records in 00:04:16.243 1+0 records out 00:04:16.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416181 s, 252 MB/s 00:04:16.243 09:53:48 -- spdk/autotest.sh@105 -- # sync 00:04:16.243 09:53:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.243 09:53:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.243 09:53:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:18.145 09:53:50 -- spdk/autotest.sh@111 -- # uname -s 00:04:18.145 09:53:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:18.145 09:53:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:18.145 09:53:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.710 Hugepages 00:04:18.710 node hugesize free / total 00:04:18.710 node0 1048576kB 0 / 0 00:04:18.968 node0 2048kB 0 / 0 00:04:18.968 00:04:18.968 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.969 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:18.969 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:18.969 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:18.969 09:53:51 -- spdk/autotest.sh@117 -- # uname -s 00:04:18.969 09:53:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:18.969 09:53:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:18.969 09:53:51 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.903 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.903 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.903 09:53:51 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:20.837 09:53:52 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:20.837 09:53:52 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:20.837 09:53:52 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.837 09:53:52 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:20.837 09:53:52 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:20.837 09:53:52 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:20.837 09:53:52 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.837 09:53:52 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:20.837 09:53:52 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:21.096 09:53:53 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:21.096 09:53:53 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:21.096 09:53:53 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:04:21.364 Waiting for block devices as requested 00:04:21.364 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.364 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.664 09:53:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:21.664 09:53:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:21.664 09:53:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:21.664 09:53:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.664 09:53:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.664 09:53:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:21.664 09:53:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:21.664 09:53:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:21.664 09:53:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:21.664 09:53:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:21.664 09:53:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:21.664 09:53:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:21.664 09:53:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:21.664 09:53:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:21.664 09:53:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:21.664 09:53:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:21.664 09:53:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:21.664 09:53:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:21.664 09:53:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:21.664 09:53:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:21.664 09:53:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:21.664 09:53:53 -- common/autotest_common.sh@1541 -- # continue 00:04:21.664 09:53:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:21.664 09:53:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:21.664 09:53:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:21.664 09:53:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:21.664 09:53:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.664 09:53:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:21.664 09:53:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:21.664 09:53:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:21.664 09:53:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:21.664 09:53:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:21.664 09:53:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:21.664 09:53:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:21.664 09:53:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:21.664 09:53:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:21.664 09:53:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:21.664 09:53:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:21.664 09:53:53 
-- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:21.664 09:53:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:21.664 09:53:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:21.664 09:53:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:21.664 09:53:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:21.664 09:53:53 -- common/autotest_common.sh@1541 -- # continue 00:04:21.664 09:53:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:21.664 09:53:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.664 09:53:53 -- common/autotest_common.sh@10 -- # set +x 00:04:21.664 09:53:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:21.664 09:53:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.664 09:53:53 -- common/autotest_common.sh@10 -- # set +x 00:04:21.664 09:53:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.489 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.489 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.489 09:53:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:22.489 09:53:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.489 09:53:54 -- common/autotest_common.sh@10 -- # set +x 00:04:22.489 09:53:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.489 09:53:54 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:22.489 09:53:54 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.489 09:53:54 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:22.489 09:53:54 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:22.489 09:53:54 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:22.489 09:53:54 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.489 09:53:54 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:22.489 09:53:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:22.489 09:53:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:22.489 09:53:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.489 09:53:54 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.489 09:53:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:22.748 09:53:54 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:22.748 09:53:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:22.748 09:53:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:22.748 09:53:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:22.748 09:53:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:22.748 09:53:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.748 09:53:54 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:22.748 09:53:54 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:22.748 09:53:54 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:22.748 09:53:54 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.748 09:53:54 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:22.748 09:53:54 -- common/autotest_common.sh@1570 -- # return 0 
00:04:22.748 09:53:54 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:22.748 09:53:54 -- common/autotest_common.sh@1578 -- # return 0 00:04:22.748 09:53:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:22.748 09:53:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:22.748 09:53:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.748 09:53:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.748 09:53:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:22.748 09:53:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.748 09:53:54 -- common/autotest_common.sh@10 -- # set +x 00:04:22.748 09:53:54 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:22.748 09:53:54 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:22.748 09:53:54 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:22.748 09:53:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.748 09:53:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.748 09:53:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.748 09:53:54 -- common/autotest_common.sh@10 -- # set +x 00:04:22.748 ************************************ 00:04:22.748 START TEST env 00:04:22.748 ************************************ 00:04:22.748 09:53:54 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.748 * Looking for test storage... 00:04:22.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:22.748 09:53:54 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:22.749 09:53:54 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:22.749 09:53:54 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:22.749 09:53:54 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:22.749 09:53:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.749 09:53:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.749 09:53:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.749 09:53:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.749 09:53:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.749 09:53:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.749 09:53:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.749 09:53:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.749 09:53:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.749 09:53:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.749 09:53:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.749 09:53:54 env -- scripts/common.sh@344 -- # case "$op" in 00:04:22.749 09:53:54 env -- scripts/common.sh@345 -- # : 1 00:04:22.749 09:53:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.749 09:53:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.749 09:53:54 env -- scripts/common.sh@365 -- # decimal 1 00:04:22.749 09:53:54 env -- scripts/common.sh@353 -- # local d=1 00:04:22.749 09:53:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.749 09:53:54 env -- scripts/common.sh@355 -- # echo 1 00:04:22.749 09:53:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.749 09:53:54 env -- scripts/common.sh@366 -- # decimal 2 00:04:22.749 09:53:54 env -- scripts/common.sh@353 -- # local d=2 00:04:23.007 09:53:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.007 09:53:54 env -- scripts/common.sh@355 -- # echo 2 00:04:23.007 09:53:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.007 09:53:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.007 09:53:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.007 09:53:54 env -- scripts/common.sh@368 -- # return 0 00:04:23.007 09:53:54 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.007 09:53:54 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.008 --rc genhtml_branch_coverage=1 00:04:23.008 --rc genhtml_function_coverage=1 00:04:23.008 --rc genhtml_legend=1 00:04:23.008 --rc geninfo_all_blocks=1 00:04:23.008 --rc geninfo_unexecuted_blocks=1 00:04:23.008 00:04:23.008 ' 00:04:23.008 09:53:54 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.008 --rc genhtml_branch_coverage=1 00:04:23.008 --rc genhtml_function_coverage=1 00:04:23.008 --rc genhtml_legend=1 00:04:23.008 --rc geninfo_all_blocks=1 00:04:23.008 --rc geninfo_unexecuted_blocks=1 00:04:23.008 00:04:23.008 ' 00:04:23.008 09:53:54 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.008 --rc genhtml_branch_coverage=1 00:04:23.008 --rc genhtml_function_coverage=1 00:04:23.008 --rc genhtml_legend=1 00:04:23.008 --rc geninfo_all_blocks=1 00:04:23.008 --rc geninfo_unexecuted_blocks=1 00:04:23.008 00:04:23.008 ' 00:04:23.008 09:53:54 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.008 --rc genhtml_branch_coverage=1 00:04:23.008 --rc genhtml_function_coverage=1 00:04:23.008 --rc genhtml_legend=1 00:04:23.008 --rc geninfo_all_blocks=1 00:04:23.008 --rc geninfo_unexecuted_blocks=1 00:04:23.008 00:04:23.008 ' 00:04:23.008 09:53:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.008 09:53:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.008 09:53:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.008 09:53:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.008 ************************************ 00:04:23.008 START TEST env_memory 00:04:23.008 ************************************ 00:04:23.008 09:53:54 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:23.008 00:04:23.008 00:04:23.008 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.008 http://cunit.sourceforge.net/ 00:04:23.008 00:04:23.008 00:04:23.008 Suite: memory 00:04:23.008 Test: alloc and free memory map ...[2024-11-04 09:53:54.979145] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:23.008 passed 00:04:23.008 Test: mem map translation ...[2024-11-04 09:53:55.010259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:23.008 [2024-11-04 09:53:55.010293] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:23.008 [2024-11-04 09:53:55.010349] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:23.008 [2024-11-04 09:53:55.010360] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:23.008 passed 00:04:23.008 Test: mem map registration ...[2024-11-04 09:53:55.073989] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:23.008 [2024-11-04 09:53:55.074017] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:23.008 passed 00:04:23.008 Test: mem map adjacent registrations ...passed 00:04:23.008 00:04:23.008 Run Summary: Type Total Ran Passed Failed Inactive 00:04:23.008 suites 1 1 n/a 0 0 00:04:23.008 tests 4 4 4 0 0 00:04:23.008 asserts 152 152 152 0 n/a 00:04:23.008 00:04:23.008 Elapsed time = 0.213 seconds 00:04:23.008 00:04:23.008 real 0m0.230s 00:04:23.008 user 0m0.218s 00:04:23.008 sys 0m0.009s 00:04:23.008 09:53:55 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.008 09:53:55 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.008 ************************************ 00:04:23.008 END TEST env_memory 00:04:23.008 ************************************ 00:04:23.267 09:53:55 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.267 09:53:55 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.267 09:53:55 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.267 09:53:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.267 ************************************ 00:04:23.267 START TEST env_vtophys 00:04:23.267 ************************************ 00:04:23.267 09:53:55 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.267 EAL: lib.eal log level changed from notice to debug 00:04:23.267 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.267 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.267 EAL: Maximum logical cores by configuration: 128 00:04:23.267 EAL: Detected CPU lcores: 10 00:04:23.267 EAL: Detected NUMA nodes: 1 00:04:23.267 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.267 EAL: Detected shared linkage of DPDK 00:04:23.267 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:23.267 EAL: Selected IOVA mode 'PA' 00:04:23.267 EAL: Probing VFIO support... 00:04:23.267 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.267 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.267 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.267 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.267 EAL: Setting up physically contiguous memory... 00:04:23.267 EAL: Setting maximum number of open files to 524288 00:04:23.267 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.267 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.267 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.267 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.267 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.267 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.267 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.267 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.267 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.267 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.267 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.267 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.267 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.267 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.267 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.267 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.267 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.267 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.267 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.267 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.267 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.267 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.267 EAL: Hugepages will be freed exactly as allocated. 00:04:23.267 EAL: No shared files mode enabled, IPC is disabled 00:04:23.267 EAL: No shared files mode enabled, IPC is disabled 00:04:23.267 EAL: TSC frequency is ~2200000 KHz 00:04:23.267 EAL: Main lcore 0 is ready (tid=7f6506ee0a00;cpuset=[0]) 00:04:23.267 EAL: Trying to obtain current memory policy. 00:04:23.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.267 EAL: Restoring previous memory policy: 0 00:04:23.267 EAL: request: mp_malloc_sync 00:04:23.267 EAL: No shared files mode enabled, IPC is disabled 00:04:23.267 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.267 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.267 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.267 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.267 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:23.267 00:04:23.267 00:04:23.267 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.267 http://cunit.sourceforge.net/ 00:04:23.267 00:04:23.267 00:04:23.267 Suite: components_suite 00:04:23.267 Test: vtophys_malloc_test ...passed 00:04:23.267 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:23.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.267 EAL: Restoring previous memory policy: 4 00:04:23.267 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.267 EAL: request: mp_malloc_sync 00:04:23.267 EAL: No shared files mode enabled, IPC is disabled 00:04:23.267 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.267 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.267 EAL: request: mp_malloc_sync 00:04:23.267 EAL: No shared files mode enabled, IPC is disabled 00:04:23.267 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.267 EAL: Trying to obtain current memory policy. 00:04:23.267 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.267 EAL: Restoring previous memory policy: 4 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.268 EAL: Trying to obtain current memory policy. 00:04:23.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.268 EAL: Restoring previous memory policy: 4 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.268 EAL: Trying to obtain current memory policy. 00:04:23.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.268 EAL: Restoring previous memory policy: 4 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.268 EAL: Trying to obtain current memory policy. 00:04:23.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.268 EAL: Restoring previous memory policy: 4 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.268 EAL: Trying to obtain current memory policy. 
00:04:23.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.268 EAL: Restoring previous memory policy: 4 00:04:23.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.268 EAL: request: mp_malloc_sync 00:04:23.268 EAL: No shared files mode enabled, IPC is disabled 00:04:23.268 EAL: Heap on socket 0 was expanded by 66MB 00:04:23.527 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.527 EAL: request: mp_malloc_sync 00:04:23.527 EAL: No shared files mode enabled, IPC is disabled 00:04:23.527 EAL: Heap on socket 0 was shrunk by 66MB 00:04:23.527 EAL: Trying to obtain current memory policy. 00:04:23.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.527 EAL: Restoring previous memory policy: 4 00:04:23.527 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.527 EAL: request: mp_malloc_sync 00:04:23.527 EAL: No shared files mode enabled, IPC is disabled 00:04:23.527 EAL: Heap on socket 0 was expanded by 130MB 00:04:23.527 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.527 EAL: request: mp_malloc_sync 00:04:23.527 EAL: No shared files mode enabled, IPC is disabled 00:04:23.527 EAL: Heap on socket 0 was shrunk by 130MB 00:04:23.527 EAL: Trying to obtain current memory policy. 00:04:23.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.527 EAL: Restoring previous memory policy: 4 00:04:23.527 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.527 EAL: request: mp_malloc_sync 00:04:23.527 EAL: No shared files mode enabled, IPC is disabled 00:04:23.527 EAL: Heap on socket 0 was expanded by 258MB 00:04:23.527 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.527 EAL: request: mp_malloc_sync 00:04:23.527 EAL: No shared files mode enabled, IPC is disabled 00:04:23.527 EAL: Heap on socket 0 was shrunk by 258MB 00:04:23.527 EAL: Trying to obtain current memory policy. 00:04:23.527 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.786 EAL: Restoring previous memory policy: 4 00:04:23.786 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.786 EAL: request: mp_malloc_sync 00:04:23.786 EAL: No shared files mode enabled, IPC is disabled 00:04:23.786 EAL: Heap on socket 0 was expanded by 514MB 00:04:23.786 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.044 EAL: request: mp_malloc_sync 00:04:24.044 EAL: No shared files mode enabled, IPC is disabled 00:04:24.044 EAL: Heap on socket 0 was shrunk by 514MB 00:04:24.044 EAL: Trying to obtain current memory policy. 
00:04:24.044 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.304 EAL: Restoring previous memory policy: 4 00:04:24.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.304 EAL: request: mp_malloc_sync 00:04:24.304 EAL: No shared files mode enabled, IPC is disabled 00:04:24.304 EAL: Heap on socket 0 was expanded by 1026MB 00:04:24.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.563 passed 00:04:24.563 00:04:24.563 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.563 suites 1 1 n/a 0 0 00:04:24.563 tests 2 2 2 0 0 00:04:24.563 asserts 5526 5526 5526 0 n/a 00:04:24.563 00:04:24.563 Elapsed time = 1.265 seconds 00:04:24.563 EAL: request: mp_malloc_sync 00:04:24.563 EAL: No shared files mode enabled, IPC is disabled 00:04:24.563 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:24.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.563 EAL: request: mp_malloc_sync 00:04:24.563 EAL: No shared files mode enabled, IPC is disabled 00:04:24.563 EAL: Heap on socket 0 was shrunk by 2MB 00:04:24.563 EAL: No shared files mode enabled, IPC is disabled 00:04:24.563 EAL: No shared files mode enabled, IPC is disabled 00:04:24.563 EAL: No shared files mode enabled, IPC is disabled 00:04:24.563 00:04:24.563 real 0m1.479s 00:04:24.563 user 0m0.812s 00:04:24.563 sys 0m0.533s 00:04:24.563 09:53:56 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.563 ************************************ 00:04:24.563 END TEST env_vtophys 00:04:24.563 09:53:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:24.563 ************************************ 00:04:24.866 09:53:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.866 09:53:56 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.866 09:53:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.866 09:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.866 ************************************ 00:04:24.866 START TEST env_pci 00:04:24.866 ************************************ 00:04:24.866 09:53:56 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:24.866 00:04:24.866 00:04:24.866 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.866 http://cunit.sourceforge.net/ 00:04:24.866 00:04:24.866 00:04:24.866 Suite: pci 00:04:24.866 Test: pci_hook ...[2024-11-04 09:53:56.766561] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56587 has claimed it 00:04:24.866 passed 00:04:24.866 00:04:24.866 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.866 suites 1 1 n/a 0 0 00:04:24.866 tests 1 1 1 0 0 00:04:24.866 asserts 25 25 25 0 n/a 00:04:24.866 00:04:24.866 Elapsed time = 0.002 seconds 00:04:24.866 EAL: Cannot find device (10000:00:01.0) 00:04:24.866 EAL: Failed to attach device on primary process 00:04:24.866 00:04:24.866 real 0m0.021s 00:04:24.866 user 0m0.009s 00:04:24.866 sys 0m0.010s 00:04:24.866 09:53:56 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.866 09:53:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:24.866 ************************************ 00:04:24.866 END TEST env_pci 00:04:24.866 ************************************ 00:04:24.866 09:53:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:24.866 09:53:56 env -- env/env.sh@15 -- # uname 00:04:24.866 09:53:56 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:24.866 09:53:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:24.866 09:53:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.866 09:53:56 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:24.866 09:53:56 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.866 09:53:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.866 ************************************ 00:04:24.866 START TEST env_dpdk_post_init 00:04:24.866 ************************************ 00:04:24.866 09:53:56 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:24.866 EAL: Detected CPU lcores: 10 00:04:24.866 EAL: Detected NUMA nodes: 1 00:04:24.866 EAL: Detected shared linkage of DPDK 00:04:24.866 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:24.866 EAL: Selected IOVA mode 'PA' 00:04:24.866 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:24.866 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:24.866 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:25.125 Starting DPDK initialization... 00:04:25.125 Starting SPDK post initialization... 00:04:25.125 SPDK NVMe probe 00:04:25.125 Attaching to 0000:00:10.0 00:04:25.125 Attaching to 0000:00:11.0 00:04:25.125 Attached to 0000:00:10.0 00:04:25.125 Attached to 0000:00:11.0 00:04:25.125 Cleaning up... 00:04:25.125 00:04:25.125 real 0m0.192s 00:04:25.125 user 0m0.055s 00:04:25.125 sys 0m0.038s 00:04:25.125 09:53:57 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:25.125 09:53:57 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.125 ************************************ 00:04:25.125 END TEST env_dpdk_post_init 00:04:25.125 ************************************ 00:04:25.125 09:53:57 env -- env/env.sh@26 -- # uname 00:04:25.125 09:53:57 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:25.125 09:53:57 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.125 09:53:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:25.125 09:53:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.125 09:53:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.125 ************************************ 00:04:25.125 START TEST env_mem_callbacks 00:04:25.125 ************************************ 00:04:25.125 09:53:57 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:25.125 EAL: Detected CPU lcores: 10 00:04:25.125 EAL: Detected NUMA nodes: 1 00:04:25.125 EAL: Detected shared linkage of DPDK 00:04:25.125 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:25.125 EAL: Selected IOVA mode 'PA' 00:04:25.125 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:25.125 00:04:25.125 00:04:25.125 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.125 http://cunit.sourceforge.net/ 00:04:25.125 00:04:25.125 00:04:25.125 Suite: memory 00:04:25.125 Test: test ... 
00:04:25.125 register 0x200000200000 2097152 00:04:25.125 malloc 3145728 00:04:25.125 register 0x200000400000 4194304 00:04:25.125 buf 0x200000500000 len 3145728 PASSED 00:04:25.125 malloc 64 00:04:25.125 buf 0x2000004fff40 len 64 PASSED 00:04:25.125 malloc 4194304 00:04:25.125 register 0x200000800000 6291456 00:04:25.125 buf 0x200000a00000 len 4194304 PASSED 00:04:25.125 free 0x200000500000 3145728 00:04:25.125 free 0x2000004fff40 64 00:04:25.125 unregister 0x200000400000 4194304 PASSED 00:04:25.125 free 0x200000a00000 4194304 00:04:25.125 unregister 0x200000800000 6291456 PASSED 00:04:25.125 malloc 8388608 00:04:25.125 register 0x200000400000 10485760 00:04:25.125 buf 0x200000600000 len 8388608 PASSED 00:04:25.125 free 0x200000600000 8388608 00:04:25.125 unregister 0x200000400000 10485760 PASSED 00:04:25.125 passed 00:04:25.125 00:04:25.125 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.125 suites 1 1 n/a 0 0 00:04:25.125 tests 1 1 1 0 0 00:04:25.125 asserts 15 15 15 0 n/a 00:04:25.125 00:04:25.125 Elapsed time = 0.007 seconds 00:04:25.125 00:04:25.125 real 0m0.139s 00:04:25.125 user 0m0.015s 00:04:25.125 sys 0m0.024s 00:04:25.125 09:53:57 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:25.125 09:53:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:25.125 ************************************ 00:04:25.125 END TEST env_mem_callbacks 00:04:25.125 ************************************ 00:04:25.125 00:04:25.125 real 0m2.526s 00:04:25.125 user 0m1.292s 00:04:25.125 sys 0m0.881s 00:04:25.125 09:53:57 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:25.125 09:53:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.125 ************************************ 00:04:25.125 END TEST env 00:04:25.125 ************************************ 00:04:25.385 09:53:57 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.385 09:53:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:25.385 09:53:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.385 09:53:57 -- common/autotest_common.sh@10 -- # set +x 00:04:25.385 ************************************ 00:04:25.385 START TEST rpc 00:04:25.385 ************************************ 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:25.385 * Looking for test storage... 
00:04:25.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:25.385 09:53:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.385 09:53:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.385 09:53:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.385 09:53:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.385 09:53:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.385 09:53:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.385 09:53:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.385 09:53:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.385 09:53:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.385 09:53:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.385 09:53:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.385 09:53:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.385 09:53:57 rpc -- scripts/common.sh@345 -- # : 1 00:04:25.385 09:53:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.385 09:53:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.385 09:53:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.385 09:53:57 rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.385 09:53:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.385 09:53:57 rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.385 09:53:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.385 09:53:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.385 09:53:57 rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.385 09:53:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.385 09:53:57 rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.385 09:53:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.385 09:53:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.385 09:53:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.385 09:53:57 rpc -- scripts/common.sh@368 -- # return 0 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:25.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.385 --rc genhtml_branch_coverage=1 00:04:25.385 --rc genhtml_function_coverage=1 00:04:25.385 --rc genhtml_legend=1 00:04:25.385 --rc geninfo_all_blocks=1 00:04:25.385 --rc geninfo_unexecuted_blocks=1 00:04:25.385 00:04:25.385 ' 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:25.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.385 --rc genhtml_branch_coverage=1 00:04:25.385 --rc genhtml_function_coverage=1 00:04:25.385 --rc genhtml_legend=1 00:04:25.385 --rc geninfo_all_blocks=1 00:04:25.385 --rc geninfo_unexecuted_blocks=1 00:04:25.385 00:04:25.385 ' 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:25.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.385 --rc genhtml_branch_coverage=1 00:04:25.385 --rc genhtml_function_coverage=1 00:04:25.385 --rc 
genhtml_legend=1 00:04:25.385 --rc geninfo_all_blocks=1 00:04:25.385 --rc geninfo_unexecuted_blocks=1 00:04:25.385 00:04:25.385 ' 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:25.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.385 --rc genhtml_branch_coverage=1 00:04:25.385 --rc genhtml_function_coverage=1 00:04:25.385 --rc genhtml_legend=1 00:04:25.385 --rc geninfo_all_blocks=1 00:04:25.385 --rc geninfo_unexecuted_blocks=1 00:04:25.385 00:04:25.385 ' 00:04:25.385 09:53:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56710 00:04:25.385 09:53:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:25.385 09:53:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.385 09:53:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56710 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@833 -- # '[' -z 56710 ']' 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:25.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:25.385 09:53:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.644 [2024-11-04 09:53:57.567250] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:04:25.644 [2024-11-04 09:53:57.567359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56710 ] 00:04:25.644 [2024-11-04 09:53:57.723833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.644 [2024-11-04 09:53:57.793927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:25.644 [2024-11-04 09:53:57.794013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56710' to capture a snapshot of events at runtime. 00:04:25.644 [2024-11-04 09:53:57.794034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:25.644 [2024-11-04 09:53:57.794049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:25.644 [2024-11-04 09:53:57.794058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56710 for offline analysis/debug. 
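At this point spdk_tgt (pid 56710) is up with the bdev tracepoint group enabled (-e bdev), so the notices above apply: a trace snapshot can be pulled from the shared-memory file while the tests below run. The rpc_integrity case that follows drives the target purely over JSON-RPC; a rough sketch of the same flow by hand, using the stock rpc.py client against the default /var/tmp/spdk.sock (bdev names mirror the ones in the log, and the trace command is the one quoted in the notice above):

    cd /home/vagrant/spdk_repo/spdk
    # optional: snapshot the bdev tracepoints while the target is still running
    ./build/bin/spdk_trace -s spdk_tgt -p 56710 > /tmp/bdev_trace.txt
    # create a malloc bdev, layer a passthru bdev on top, inspect, then tear down
    ./scripts/rpc.py bdev_malloc_create 8 512                      # -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0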
00:04:25.644 [2024-11-04 09:53:57.794567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.903 [2024-11-04 09:53:57.868863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:26.162 09:53:58 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:26.162 09:53:58 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:26.162 09:53:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.162 09:53:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.162 09:53:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:26.162 09:53:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:26.162 09:53:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.162 09:53:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.162 09:53:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.162 ************************************ 00:04:26.162 START TEST rpc_integrity 00:04:26.162 ************************************ 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.162 { 00:04:26.162 "name": "Malloc0", 00:04:26.162 "aliases": [ 00:04:26.162 "49a60f0c-94a1-4b94-ab08-3181f10c7847" 00:04:26.162 ], 00:04:26.162 "product_name": "Malloc disk", 00:04:26.162 "block_size": 512, 00:04:26.162 "num_blocks": 16384, 00:04:26.162 "uuid": "49a60f0c-94a1-4b94-ab08-3181f10c7847", 00:04:26.162 "assigned_rate_limits": { 00:04:26.162 "rw_ios_per_sec": 0, 00:04:26.162 "rw_mbytes_per_sec": 0, 00:04:26.162 "r_mbytes_per_sec": 0, 00:04:26.162 "w_mbytes_per_sec": 0 00:04:26.162 }, 00:04:26.162 "claimed": false, 00:04:26.162 "zoned": false, 00:04:26.162 
"supported_io_types": { 00:04:26.162 "read": true, 00:04:26.162 "write": true, 00:04:26.162 "unmap": true, 00:04:26.162 "flush": true, 00:04:26.162 "reset": true, 00:04:26.162 "nvme_admin": false, 00:04:26.162 "nvme_io": false, 00:04:26.162 "nvme_io_md": false, 00:04:26.162 "write_zeroes": true, 00:04:26.162 "zcopy": true, 00:04:26.162 "get_zone_info": false, 00:04:26.162 "zone_management": false, 00:04:26.162 "zone_append": false, 00:04:26.162 "compare": false, 00:04:26.162 "compare_and_write": false, 00:04:26.162 "abort": true, 00:04:26.162 "seek_hole": false, 00:04:26.162 "seek_data": false, 00:04:26.162 "copy": true, 00:04:26.162 "nvme_iov_md": false 00:04:26.162 }, 00:04:26.162 "memory_domains": [ 00:04:26.162 { 00:04:26.162 "dma_device_id": "system", 00:04:26.162 "dma_device_type": 1 00:04:26.162 }, 00:04:26.162 { 00:04:26.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.162 "dma_device_type": 2 00:04:26.162 } 00:04:26.162 ], 00:04:26.162 "driver_specific": {} 00:04:26.162 } 00:04:26.162 ]' 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.162 [2024-11-04 09:53:58.257885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:26.162 [2024-11-04 09:53:58.257933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:26.162 [2024-11-04 09:53:58.257949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xca1f10 00:04:26.162 [2024-11-04 09:53:58.257959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:26.162 [2024-11-04 09:53:58.259520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:26.162 [2024-11-04 09:53:58.259564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:26.162 Passthru0 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.162 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.162 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:26.162 { 00:04:26.162 "name": "Malloc0", 00:04:26.162 "aliases": [ 00:04:26.162 "49a60f0c-94a1-4b94-ab08-3181f10c7847" 00:04:26.162 ], 00:04:26.162 "product_name": "Malloc disk", 00:04:26.162 "block_size": 512, 00:04:26.162 "num_blocks": 16384, 00:04:26.162 "uuid": "49a60f0c-94a1-4b94-ab08-3181f10c7847", 00:04:26.162 "assigned_rate_limits": { 00:04:26.162 "rw_ios_per_sec": 0, 00:04:26.162 "rw_mbytes_per_sec": 0, 00:04:26.162 "r_mbytes_per_sec": 0, 00:04:26.162 "w_mbytes_per_sec": 0 00:04:26.162 }, 00:04:26.162 "claimed": true, 00:04:26.162 "claim_type": "exclusive_write", 00:04:26.162 "zoned": false, 00:04:26.162 "supported_io_types": { 00:04:26.162 "read": true, 00:04:26.162 "write": true, 00:04:26.162 "unmap": true, 00:04:26.162 "flush": true, 00:04:26.162 "reset": true, 00:04:26.162 "nvme_admin": false, 
00:04:26.162 "nvme_io": false, 00:04:26.162 "nvme_io_md": false, 00:04:26.162 "write_zeroes": true, 00:04:26.162 "zcopy": true, 00:04:26.162 "get_zone_info": false, 00:04:26.162 "zone_management": false, 00:04:26.162 "zone_append": false, 00:04:26.162 "compare": false, 00:04:26.162 "compare_and_write": false, 00:04:26.162 "abort": true, 00:04:26.162 "seek_hole": false, 00:04:26.162 "seek_data": false, 00:04:26.162 "copy": true, 00:04:26.162 "nvme_iov_md": false 00:04:26.162 }, 00:04:26.162 "memory_domains": [ 00:04:26.162 { 00:04:26.162 "dma_device_id": "system", 00:04:26.162 "dma_device_type": 1 00:04:26.162 }, 00:04:26.162 { 00:04:26.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.162 "dma_device_type": 2 00:04:26.162 } 00:04:26.162 ], 00:04:26.162 "driver_specific": {} 00:04:26.162 }, 00:04:26.162 { 00:04:26.162 "name": "Passthru0", 00:04:26.162 "aliases": [ 00:04:26.162 "999be2e4-453e-5638-be46-a5653f049ae7" 00:04:26.162 ], 00:04:26.162 "product_name": "passthru", 00:04:26.162 "block_size": 512, 00:04:26.162 "num_blocks": 16384, 00:04:26.162 "uuid": "999be2e4-453e-5638-be46-a5653f049ae7", 00:04:26.162 "assigned_rate_limits": { 00:04:26.162 "rw_ios_per_sec": 0, 00:04:26.162 "rw_mbytes_per_sec": 0, 00:04:26.162 "r_mbytes_per_sec": 0, 00:04:26.162 "w_mbytes_per_sec": 0 00:04:26.162 }, 00:04:26.162 "claimed": false, 00:04:26.162 "zoned": false, 00:04:26.162 "supported_io_types": { 00:04:26.162 "read": true, 00:04:26.162 "write": true, 00:04:26.162 "unmap": true, 00:04:26.162 "flush": true, 00:04:26.162 "reset": true, 00:04:26.162 "nvme_admin": false, 00:04:26.162 "nvme_io": false, 00:04:26.162 "nvme_io_md": false, 00:04:26.162 "write_zeroes": true, 00:04:26.162 "zcopy": true, 00:04:26.162 "get_zone_info": false, 00:04:26.162 "zone_management": false, 00:04:26.162 "zone_append": false, 00:04:26.162 "compare": false, 00:04:26.162 "compare_and_write": false, 00:04:26.162 "abort": true, 00:04:26.162 "seek_hole": false, 00:04:26.162 "seek_data": false, 00:04:26.162 "copy": true, 00:04:26.162 "nvme_iov_md": false 00:04:26.162 }, 00:04:26.162 "memory_domains": [ 00:04:26.162 { 00:04:26.162 "dma_device_id": "system", 00:04:26.162 "dma_device_type": 1 00:04:26.162 }, 00:04:26.162 { 00:04:26.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.162 "dma_device_type": 2 00:04:26.162 } 00:04:26.162 ], 00:04:26.162 "driver_specific": { 00:04:26.162 "passthru": { 00:04:26.162 "name": "Passthru0", 00:04:26.162 "base_bdev_name": "Malloc0" 00:04:26.162 } 00:04:26.162 } 00:04:26.162 } 00:04:26.162 ]' 00:04:26.163 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:26.421 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:26.421 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:26.421 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.421 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.421 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.421 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:26.421 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.421 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.421 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.421 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:26.421 09:53:58 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.422 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.422 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.422 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:26.422 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:26.422 09:53:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:26.422 00:04:26.422 real 0m0.330s 00:04:26.422 user 0m0.219s 00:04:26.422 sys 0m0.041s 00:04:26.422 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.422 09:53:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.422 ************************************ 00:04:26.422 END TEST rpc_integrity 00:04:26.422 ************************************ 00:04:26.422 09:53:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:26.422 09:53:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.422 09:53:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.422 09:53:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.422 ************************************ 00:04:26.422 START TEST rpc_plugins 00:04:26.422 ************************************ 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:26.422 { 00:04:26.422 "name": "Malloc1", 00:04:26.422 "aliases": [ 00:04:26.422 "bc85f4fd-c151-47f9-ae64-12261244bc99" 00:04:26.422 ], 00:04:26.422 "product_name": "Malloc disk", 00:04:26.422 "block_size": 4096, 00:04:26.422 "num_blocks": 256, 00:04:26.422 "uuid": "bc85f4fd-c151-47f9-ae64-12261244bc99", 00:04:26.422 "assigned_rate_limits": { 00:04:26.422 "rw_ios_per_sec": 0, 00:04:26.422 "rw_mbytes_per_sec": 0, 00:04:26.422 "r_mbytes_per_sec": 0, 00:04:26.422 "w_mbytes_per_sec": 0 00:04:26.422 }, 00:04:26.422 "claimed": false, 00:04:26.422 "zoned": false, 00:04:26.422 "supported_io_types": { 00:04:26.422 "read": true, 00:04:26.422 "write": true, 00:04:26.422 "unmap": true, 00:04:26.422 "flush": true, 00:04:26.422 "reset": true, 00:04:26.422 "nvme_admin": false, 00:04:26.422 "nvme_io": false, 00:04:26.422 "nvme_io_md": false, 00:04:26.422 "write_zeroes": true, 00:04:26.422 "zcopy": true, 00:04:26.422 "get_zone_info": false, 00:04:26.422 "zone_management": false, 00:04:26.422 "zone_append": false, 00:04:26.422 "compare": false, 00:04:26.422 "compare_and_write": false, 00:04:26.422 "abort": true, 00:04:26.422 "seek_hole": false, 00:04:26.422 "seek_data": false, 00:04:26.422 "copy": true, 00:04:26.422 "nvme_iov_md": false 00:04:26.422 }, 00:04:26.422 "memory_domains": [ 00:04:26.422 { 
00:04:26.422 "dma_device_id": "system", 00:04:26.422 "dma_device_type": 1 00:04:26.422 }, 00:04:26.422 { 00:04:26.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.422 "dma_device_type": 2 00:04:26.422 } 00:04:26.422 ], 00:04:26.422 "driver_specific": {} 00:04:26.422 } 00:04:26.422 ]' 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.422 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:26.422 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:26.681 09:53:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:26.681 00:04:26.681 real 0m0.157s 00:04:26.681 user 0m0.105s 00:04:26.681 sys 0m0.014s 00:04:26.681 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.681 ************************************ 00:04:26.681 END TEST rpc_plugins 00:04:26.681 09:53:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:26.681 ************************************ 00:04:26.681 09:53:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:26.681 09:53:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.681 09:53:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.681 09:53:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.681 ************************************ 00:04:26.681 START TEST rpc_trace_cmd_test 00:04:26.681 ************************************ 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:26.681 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56710", 00:04:26.681 "tpoint_group_mask": "0x8", 00:04:26.681 "iscsi_conn": { 00:04:26.681 "mask": "0x2", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "scsi": { 00:04:26.681 "mask": "0x4", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "bdev": { 00:04:26.681 "mask": "0x8", 00:04:26.681 "tpoint_mask": "0xffffffffffffffff" 00:04:26.681 }, 00:04:26.681 "nvmf_rdma": { 00:04:26.681 "mask": "0x10", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "nvmf_tcp": { 00:04:26.681 "mask": "0x20", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "ftl": { 00:04:26.681 
"mask": "0x40", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "blobfs": { 00:04:26.681 "mask": "0x80", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "dsa": { 00:04:26.681 "mask": "0x200", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "thread": { 00:04:26.681 "mask": "0x400", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "nvme_pcie": { 00:04:26.681 "mask": "0x800", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "iaa": { 00:04:26.681 "mask": "0x1000", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "nvme_tcp": { 00:04:26.681 "mask": "0x2000", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "bdev_nvme": { 00:04:26.681 "mask": "0x4000", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "sock": { 00:04:26.681 "mask": "0x8000", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "blob": { 00:04:26.681 "mask": "0x10000", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "bdev_raid": { 00:04:26.681 "mask": "0x20000", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 }, 00:04:26.681 "scheduler": { 00:04:26.681 "mask": "0x40000", 00:04:26.681 "tpoint_mask": "0x0" 00:04:26.681 } 00:04:26.681 }' 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:26.681 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:26.940 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:26.940 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:26.940 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:26.940 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:26.940 09:53:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:26.940 00:04:26.940 real 0m0.268s 00:04:26.940 user 0m0.227s 00:04:26.940 sys 0m0.032s 00:04:26.941 09:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.941 09:53:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:26.941 ************************************ 00:04:26.941 END TEST rpc_trace_cmd_test 00:04:26.941 ************************************ 00:04:26.941 09:53:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:26.941 09:53:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:26.941 09:53:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:26.941 09:53:58 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.941 09:53:58 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.941 09:53:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.941 ************************************ 00:04:26.941 START TEST rpc_daemon_integrity 00:04:26.941 ************************************ 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.941 
09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:26.941 { 00:04:26.941 "name": "Malloc2", 00:04:26.941 "aliases": [ 00:04:26.941 "17e227bc-c1f8-4b73-978a-bf50af826d34" 00:04:26.941 ], 00:04:26.941 "product_name": "Malloc disk", 00:04:26.941 "block_size": 512, 00:04:26.941 "num_blocks": 16384, 00:04:26.941 "uuid": "17e227bc-c1f8-4b73-978a-bf50af826d34", 00:04:26.941 "assigned_rate_limits": { 00:04:26.941 "rw_ios_per_sec": 0, 00:04:26.941 "rw_mbytes_per_sec": 0, 00:04:26.941 "r_mbytes_per_sec": 0, 00:04:26.941 "w_mbytes_per_sec": 0 00:04:26.941 }, 00:04:26.941 "claimed": false, 00:04:26.941 "zoned": false, 00:04:26.941 "supported_io_types": { 00:04:26.941 "read": true, 00:04:26.941 "write": true, 00:04:26.941 "unmap": true, 00:04:26.941 "flush": true, 00:04:26.941 "reset": true, 00:04:26.941 "nvme_admin": false, 00:04:26.941 "nvme_io": false, 00:04:26.941 "nvme_io_md": false, 00:04:26.941 "write_zeroes": true, 00:04:26.941 "zcopy": true, 00:04:26.941 "get_zone_info": false, 00:04:26.941 "zone_management": false, 00:04:26.941 "zone_append": false, 00:04:26.941 "compare": false, 00:04:26.941 "compare_and_write": false, 00:04:26.941 "abort": true, 00:04:26.941 "seek_hole": false, 00:04:26.941 "seek_data": false, 00:04:26.941 "copy": true, 00:04:26.941 "nvme_iov_md": false 00:04:26.941 }, 00:04:26.941 "memory_domains": [ 00:04:26.941 { 00:04:26.941 "dma_device_id": "system", 00:04:26.941 "dma_device_type": 1 00:04:26.941 }, 00:04:26.941 { 00:04:26.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:26.941 "dma_device_type": 2 00:04:26.941 } 00:04:26.941 ], 00:04:26.941 "driver_specific": {} 00:04:26.941 } 00:04:26.941 ]' 00:04:26.941 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.200 [2024-11-04 09:53:59.155169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:27.200 [2024-11-04 09:53:59.155214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:27.200 [2024-11-04 09:53:59.155231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe482f0 00:04:27.200 [2024-11-04 09:53:59.155240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:27.200 [2024-11-04 09:53:59.156722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:27.200 [2024-11-04 09:53:59.156754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:27.200 Passthru0 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.200 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:27.200 { 00:04:27.200 "name": "Malloc2", 00:04:27.200 "aliases": [ 00:04:27.200 "17e227bc-c1f8-4b73-978a-bf50af826d34" 00:04:27.200 ], 00:04:27.200 "product_name": "Malloc disk", 00:04:27.200 "block_size": 512, 00:04:27.200 "num_blocks": 16384, 00:04:27.200 "uuid": "17e227bc-c1f8-4b73-978a-bf50af826d34", 00:04:27.200 "assigned_rate_limits": { 00:04:27.200 "rw_ios_per_sec": 0, 00:04:27.200 "rw_mbytes_per_sec": 0, 00:04:27.200 "r_mbytes_per_sec": 0, 00:04:27.200 "w_mbytes_per_sec": 0 00:04:27.200 }, 00:04:27.200 "claimed": true, 00:04:27.200 "claim_type": "exclusive_write", 00:04:27.200 "zoned": false, 00:04:27.200 "supported_io_types": { 00:04:27.200 "read": true, 00:04:27.200 "write": true, 00:04:27.200 "unmap": true, 00:04:27.201 "flush": true, 00:04:27.201 "reset": true, 00:04:27.201 "nvme_admin": false, 00:04:27.201 "nvme_io": false, 00:04:27.201 "nvme_io_md": false, 00:04:27.201 "write_zeroes": true, 00:04:27.201 "zcopy": true, 00:04:27.201 "get_zone_info": false, 00:04:27.201 "zone_management": false, 00:04:27.201 "zone_append": false, 00:04:27.201 "compare": false, 00:04:27.201 "compare_and_write": false, 00:04:27.201 "abort": true, 00:04:27.201 "seek_hole": false, 00:04:27.201 "seek_data": false, 00:04:27.201 "copy": true, 00:04:27.201 "nvme_iov_md": false 00:04:27.201 }, 00:04:27.201 "memory_domains": [ 00:04:27.201 { 00:04:27.201 "dma_device_id": "system", 00:04:27.201 "dma_device_type": 1 00:04:27.201 }, 00:04:27.201 { 00:04:27.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.201 "dma_device_type": 2 00:04:27.201 } 00:04:27.201 ], 00:04:27.201 "driver_specific": {} 00:04:27.201 }, 00:04:27.201 { 00:04:27.201 "name": "Passthru0", 00:04:27.201 "aliases": [ 00:04:27.201 "75e9916d-3c63-5012-9123-e4b2c636e7c6" 00:04:27.201 ], 00:04:27.201 "product_name": "passthru", 00:04:27.201 "block_size": 512, 00:04:27.201 "num_blocks": 16384, 00:04:27.201 "uuid": "75e9916d-3c63-5012-9123-e4b2c636e7c6", 00:04:27.201 "assigned_rate_limits": { 00:04:27.201 "rw_ios_per_sec": 0, 00:04:27.201 "rw_mbytes_per_sec": 0, 00:04:27.201 "r_mbytes_per_sec": 0, 00:04:27.201 "w_mbytes_per_sec": 0 00:04:27.201 }, 00:04:27.201 "claimed": false, 00:04:27.201 "zoned": false, 00:04:27.201 "supported_io_types": { 00:04:27.201 "read": true, 00:04:27.201 "write": true, 00:04:27.201 "unmap": true, 00:04:27.201 "flush": true, 00:04:27.201 "reset": true, 00:04:27.201 "nvme_admin": false, 00:04:27.201 "nvme_io": false, 00:04:27.201 "nvme_io_md": 
false, 00:04:27.201 "write_zeroes": true, 00:04:27.201 "zcopy": true, 00:04:27.201 "get_zone_info": false, 00:04:27.201 "zone_management": false, 00:04:27.201 "zone_append": false, 00:04:27.201 "compare": false, 00:04:27.201 "compare_and_write": false, 00:04:27.201 "abort": true, 00:04:27.201 "seek_hole": false, 00:04:27.201 "seek_data": false, 00:04:27.201 "copy": true, 00:04:27.201 "nvme_iov_md": false 00:04:27.201 }, 00:04:27.201 "memory_domains": [ 00:04:27.201 { 00:04:27.201 "dma_device_id": "system", 00:04:27.201 "dma_device_type": 1 00:04:27.201 }, 00:04:27.201 { 00:04:27.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:27.201 "dma_device_type": 2 00:04:27.201 } 00:04:27.201 ], 00:04:27.201 "driver_specific": { 00:04:27.201 "passthru": { 00:04:27.201 "name": "Passthru0", 00:04:27.201 "base_bdev_name": "Malloc2" 00:04:27.201 } 00:04:27.201 } 00:04:27.201 } 00:04:27.201 ]' 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:27.201 00:04:27.201 real 0m0.308s 00:04:27.201 user 0m0.208s 00:04:27.201 sys 0m0.042s 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.201 09:53:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:27.201 ************************************ 00:04:27.201 END TEST rpc_daemon_integrity 00:04:27.201 ************************************ 00:04:27.201 09:53:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:27.201 09:53:59 rpc -- rpc/rpc.sh@84 -- # killprocess 56710 00:04:27.201 09:53:59 rpc -- common/autotest_common.sh@952 -- # '[' -z 56710 ']' 00:04:27.201 09:53:59 rpc -- common/autotest_common.sh@956 -- # kill -0 56710 00:04:27.201 09:53:59 rpc -- common/autotest_common.sh@957 -- # uname 00:04:27.201 09:53:59 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:27.201 09:53:59 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56710 00:04:27.460 09:53:59 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:27.460 
09:53:59 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:27.460 killing process with pid 56710 00:04:27.460 09:53:59 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56710' 00:04:27.460 09:53:59 rpc -- common/autotest_common.sh@971 -- # kill 56710 00:04:27.460 09:53:59 rpc -- common/autotest_common.sh@976 -- # wait 56710 00:04:27.719 00:04:27.719 real 0m2.469s 00:04:27.719 user 0m3.147s 00:04:27.719 sys 0m0.678s 00:04:27.719 09:53:59 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.719 09:53:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.719 ************************************ 00:04:27.719 END TEST rpc 00:04:27.719 ************************************ 00:04:27.719 09:53:59 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:27.719 09:53:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.719 09:53:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.719 09:53:59 -- common/autotest_common.sh@10 -- # set +x 00:04:27.719 ************************************ 00:04:27.719 START TEST skip_rpc 00:04:27.719 ************************************ 00:04:27.719 09:53:59 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:27.978 * Looking for test storage... 00:04:27.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.978 09:53:59 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.978 09:53:59 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.978 09:53:59 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.978 09:54:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.978 --rc genhtml_branch_coverage=1 00:04:27.978 --rc genhtml_function_coverage=1 00:04:27.978 --rc genhtml_legend=1 00:04:27.978 --rc geninfo_all_blocks=1 00:04:27.978 --rc geninfo_unexecuted_blocks=1 00:04:27.978 00:04:27.978 ' 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.978 --rc genhtml_branch_coverage=1 00:04:27.978 --rc genhtml_function_coverage=1 00:04:27.978 --rc genhtml_legend=1 00:04:27.978 --rc geninfo_all_blocks=1 00:04:27.978 --rc geninfo_unexecuted_blocks=1 00:04:27.978 00:04:27.978 ' 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.978 --rc genhtml_branch_coverage=1 00:04:27.978 --rc genhtml_function_coverage=1 00:04:27.978 --rc genhtml_legend=1 00:04:27.978 --rc geninfo_all_blocks=1 00:04:27.978 --rc geninfo_unexecuted_blocks=1 00:04:27.978 00:04:27.978 ' 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.978 --rc genhtml_branch_coverage=1 00:04:27.978 --rc genhtml_function_coverage=1 00:04:27.978 --rc genhtml_legend=1 00:04:27.978 --rc geninfo_all_blocks=1 00:04:27.978 --rc geninfo_unexecuted_blocks=1 00:04:27.978 00:04:27.978 ' 00:04:27.978 09:54:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.978 09:54:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:27.978 09:54:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.978 09:54:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.978 ************************************ 00:04:27.978 START TEST skip_rpc 00:04:27.978 ************************************ 00:04:27.978 09:54:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:27.978 09:54:00 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56909 00:04:27.978 09:54:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:27.978 09:54:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.978 09:54:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:27.978 [2024-11-04 09:54:00.099529] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:04:27.978 [2024-11-04 09:54:00.099653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56909 ] 00:04:28.237 [2024-11-04 09:54:00.245985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.237 [2024-11-04 09:54:00.307232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.237 [2024-11-04 09:54:00.377652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56909 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56909 ']' 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56909 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56909 00:04:33.507 killing process with pid 56909 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 56909' 00:04:33.507 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56909 00:04:33.508 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56909 00:04:33.508 00:04:33.508 real 0m5.450s 00:04:33.508 user 0m5.067s 00:04:33.508 sys 0m0.297s 00:04:33.508 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:33.508 ************************************ 00:04:33.508 END TEST skip_rpc 00:04:33.508 ************************************ 00:04:33.508 09:54:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.508 09:54:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:33.508 09:54:05 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:33.508 09:54:05 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:33.508 09:54:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.508 ************************************ 00:04:33.508 START TEST skip_rpc_with_json 00:04:33.508 ************************************ 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56990 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56990 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 56990 ']' 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:33.508 09:54:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.508 [2024-11-04 09:54:05.604790] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
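The skip_rpc_with_json case starting here (pid 56990) checks config round-tripping: it first provokes a "No such device" error from nvmf_get_transports while no transport exists, creates the TCP transport, dumps the whole runtime configuration with save_config (the large JSON block below), and a second target (pid 57015) is then booted with --no-rpc-server --json pointing at that file. A hedged sketch of the same round trip, with file names chosen only for illustration:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -m 0x1 &                   # first target, RPC server enabled
    sleep 1                                         # give it a moment to create /var/tmp/spdk.sock
    ./scripts/rpc.py framework_wait_init
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json
    ./scripts/rpc.py spdk_kill_instance SIGTERM
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json   # replays the saved config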
00:04:33.508 [2024-11-04 09:54:05.605067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56990 ] 00:04:33.767 [2024-11-04 09:54:05.753826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.767 [2024-11-04 09:54:05.810050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.767 [2024-11-04 09:54:05.876679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.026 [2024-11-04 09:54:06.076744] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:34.026 request: 00:04:34.026 { 00:04:34.026 "trtype": "tcp", 00:04:34.026 "method": "nvmf_get_transports", 00:04:34.026 "req_id": 1 00:04:34.026 } 00:04:34.026 Got JSON-RPC error response 00:04:34.026 response: 00:04:34.026 { 00:04:34.026 "code": -19, 00:04:34.026 "message": "No such device" 00:04:34.026 } 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.026 [2024-11-04 09:54:06.088917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.026 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.286 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.286 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.286 { 00:04:34.286 "subsystems": [ 00:04:34.286 { 00:04:34.286 "subsystem": "fsdev", 00:04:34.286 "config": [ 00:04:34.286 { 00:04:34.286 "method": "fsdev_set_opts", 00:04:34.286 "params": { 00:04:34.286 "fsdev_io_pool_size": 65535, 00:04:34.286 "fsdev_io_cache_size": 256 00:04:34.286 } 00:04:34.286 } 00:04:34.286 ] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "keyring", 00:04:34.286 "config": [] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "iobuf", 00:04:34.286 "config": [ 00:04:34.286 { 00:04:34.286 "method": "iobuf_set_options", 00:04:34.286 "params": { 00:04:34.286 "small_pool_count": 8192, 00:04:34.286 "large_pool_count": 1024, 00:04:34.286 "small_bufsize": 8192, 00:04:34.286 "large_bufsize": 135168, 00:04:34.286 "enable_numa": false 00:04:34.286 } 
00:04:34.286 } 00:04:34.286 ] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "sock", 00:04:34.286 "config": [ 00:04:34.286 { 00:04:34.286 "method": "sock_set_default_impl", 00:04:34.286 "params": { 00:04:34.286 "impl_name": "uring" 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "sock_impl_set_options", 00:04:34.286 "params": { 00:04:34.286 "impl_name": "ssl", 00:04:34.286 "recv_buf_size": 4096, 00:04:34.286 "send_buf_size": 4096, 00:04:34.286 "enable_recv_pipe": true, 00:04:34.286 "enable_quickack": false, 00:04:34.286 "enable_placement_id": 0, 00:04:34.286 "enable_zerocopy_send_server": true, 00:04:34.286 "enable_zerocopy_send_client": false, 00:04:34.286 "zerocopy_threshold": 0, 00:04:34.286 "tls_version": 0, 00:04:34.286 "enable_ktls": false 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "sock_impl_set_options", 00:04:34.286 "params": { 00:04:34.286 "impl_name": "posix", 00:04:34.286 "recv_buf_size": 2097152, 00:04:34.286 "send_buf_size": 2097152, 00:04:34.286 "enable_recv_pipe": true, 00:04:34.286 "enable_quickack": false, 00:04:34.286 "enable_placement_id": 0, 00:04:34.286 "enable_zerocopy_send_server": true, 00:04:34.286 "enable_zerocopy_send_client": false, 00:04:34.286 "zerocopy_threshold": 0, 00:04:34.286 "tls_version": 0, 00:04:34.286 "enable_ktls": false 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "sock_impl_set_options", 00:04:34.286 "params": { 00:04:34.286 "impl_name": "uring", 00:04:34.286 "recv_buf_size": 2097152, 00:04:34.286 "send_buf_size": 2097152, 00:04:34.286 "enable_recv_pipe": true, 00:04:34.286 "enable_quickack": false, 00:04:34.286 "enable_placement_id": 0, 00:04:34.286 "enable_zerocopy_send_server": false, 00:04:34.286 "enable_zerocopy_send_client": false, 00:04:34.286 "zerocopy_threshold": 0, 00:04:34.286 "tls_version": 0, 00:04:34.286 "enable_ktls": false 00:04:34.286 } 00:04:34.286 } 00:04:34.286 ] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "vmd", 00:04:34.286 "config": [] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "accel", 00:04:34.286 "config": [ 00:04:34.286 { 00:04:34.286 "method": "accel_set_options", 00:04:34.286 "params": { 00:04:34.286 "small_cache_size": 128, 00:04:34.286 "large_cache_size": 16, 00:04:34.286 "task_count": 2048, 00:04:34.286 "sequence_count": 2048, 00:04:34.286 "buf_count": 2048 00:04:34.286 } 00:04:34.286 } 00:04:34.286 ] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "bdev", 00:04:34.286 "config": [ 00:04:34.286 { 00:04:34.286 "method": "bdev_set_options", 00:04:34.286 "params": { 00:04:34.286 "bdev_io_pool_size": 65535, 00:04:34.286 "bdev_io_cache_size": 256, 00:04:34.286 "bdev_auto_examine": true, 00:04:34.286 "iobuf_small_cache_size": 128, 00:04:34.286 "iobuf_large_cache_size": 16 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "bdev_raid_set_options", 00:04:34.286 "params": { 00:04:34.286 "process_window_size_kb": 1024, 00:04:34.286 "process_max_bandwidth_mb_sec": 0 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "bdev_iscsi_set_options", 00:04:34.286 "params": { 00:04:34.286 "timeout_sec": 30 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "bdev_nvme_set_options", 00:04:34.286 "params": { 00:04:34.286 "action_on_timeout": "none", 00:04:34.286 "timeout_us": 0, 00:04:34.286 "timeout_admin_us": 0, 00:04:34.286 "keep_alive_timeout_ms": 10000, 00:04:34.286 "arbitration_burst": 0, 00:04:34.286 "low_priority_weight": 0, 00:04:34.286 "medium_priority_weight": 
0, 00:04:34.286 "high_priority_weight": 0, 00:04:34.286 "nvme_adminq_poll_period_us": 10000, 00:04:34.286 "nvme_ioq_poll_period_us": 0, 00:04:34.286 "io_queue_requests": 0, 00:04:34.286 "delay_cmd_submit": true, 00:04:34.286 "transport_retry_count": 4, 00:04:34.286 "bdev_retry_count": 3, 00:04:34.286 "transport_ack_timeout": 0, 00:04:34.286 "ctrlr_loss_timeout_sec": 0, 00:04:34.286 "reconnect_delay_sec": 0, 00:04:34.286 "fast_io_fail_timeout_sec": 0, 00:04:34.286 "disable_auto_failback": false, 00:04:34.286 "generate_uuids": false, 00:04:34.286 "transport_tos": 0, 00:04:34.286 "nvme_error_stat": false, 00:04:34.286 "rdma_srq_size": 0, 00:04:34.286 "io_path_stat": false, 00:04:34.286 "allow_accel_sequence": false, 00:04:34.286 "rdma_max_cq_size": 0, 00:04:34.286 "rdma_cm_event_timeout_ms": 0, 00:04:34.286 "dhchap_digests": [ 00:04:34.286 "sha256", 00:04:34.286 "sha384", 00:04:34.286 "sha512" 00:04:34.286 ], 00:04:34.286 "dhchap_dhgroups": [ 00:04:34.286 "null", 00:04:34.286 "ffdhe2048", 00:04:34.286 "ffdhe3072", 00:04:34.286 "ffdhe4096", 00:04:34.286 "ffdhe6144", 00:04:34.286 "ffdhe8192" 00:04:34.286 ] 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "bdev_nvme_set_hotplug", 00:04:34.286 "params": { 00:04:34.286 "period_us": 100000, 00:04:34.286 "enable": false 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "bdev_wait_for_examine" 00:04:34.286 } 00:04:34.286 ] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "scsi", 00:04:34.286 "config": null 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "scheduler", 00:04:34.286 "config": [ 00:04:34.286 { 00:04:34.286 "method": "framework_set_scheduler", 00:04:34.286 "params": { 00:04:34.286 "name": "static" 00:04:34.286 } 00:04:34.286 } 00:04:34.286 ] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "vhost_scsi", 00:04:34.286 "config": [] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "vhost_blk", 00:04:34.286 "config": [] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "ublk", 00:04:34.286 "config": [] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "nbd", 00:04:34.286 "config": [] 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "subsystem": "nvmf", 00:04:34.286 "config": [ 00:04:34.286 { 00:04:34.286 "method": "nvmf_set_config", 00:04:34.286 "params": { 00:04:34.286 "discovery_filter": "match_any", 00:04:34.286 "admin_cmd_passthru": { 00:04:34.286 "identify_ctrlr": false 00:04:34.286 }, 00:04:34.286 "dhchap_digests": [ 00:04:34.286 "sha256", 00:04:34.286 "sha384", 00:04:34.286 "sha512" 00:04:34.286 ], 00:04:34.286 "dhchap_dhgroups": [ 00:04:34.286 "null", 00:04:34.286 "ffdhe2048", 00:04:34.286 "ffdhe3072", 00:04:34.286 "ffdhe4096", 00:04:34.286 "ffdhe6144", 00:04:34.286 "ffdhe8192" 00:04:34.286 ] 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "nvmf_set_max_subsystems", 00:04:34.286 "params": { 00:04:34.286 "max_subsystems": 1024 00:04:34.286 } 00:04:34.286 }, 00:04:34.286 { 00:04:34.286 "method": "nvmf_set_crdt", 00:04:34.286 "params": { 00:04:34.286 "crdt1": 0, 00:04:34.286 "crdt2": 0, 00:04:34.286 "crdt3": 0 00:04:34.287 } 00:04:34.287 }, 00:04:34.287 { 00:04:34.287 "method": "nvmf_create_transport", 00:04:34.287 "params": { 00:04:34.287 "trtype": "TCP", 00:04:34.287 "max_queue_depth": 128, 00:04:34.287 "max_io_qpairs_per_ctrlr": 127, 00:04:34.287 "in_capsule_data_size": 4096, 00:04:34.287 "max_io_size": 131072, 00:04:34.287 "io_unit_size": 131072, 00:04:34.287 "max_aq_depth": 128, 00:04:34.287 "num_shared_buffers": 511, 00:04:34.287 
"buf_cache_size": 4294967295, 00:04:34.287 "dif_insert_or_strip": false, 00:04:34.287 "zcopy": false, 00:04:34.287 "c2h_success": true, 00:04:34.287 "sock_priority": 0, 00:04:34.287 "abort_timeout_sec": 1, 00:04:34.287 "ack_timeout": 0, 00:04:34.287 "data_wr_pool_size": 0 00:04:34.287 } 00:04:34.287 } 00:04:34.287 ] 00:04:34.287 }, 00:04:34.287 { 00:04:34.287 "subsystem": "iscsi", 00:04:34.287 "config": [ 00:04:34.287 { 00:04:34.287 "method": "iscsi_set_options", 00:04:34.287 "params": { 00:04:34.287 "node_base": "iqn.2016-06.io.spdk", 00:04:34.287 "max_sessions": 128, 00:04:34.287 "max_connections_per_session": 2, 00:04:34.287 "max_queue_depth": 64, 00:04:34.287 "default_time2wait": 2, 00:04:34.287 "default_time2retain": 20, 00:04:34.287 "first_burst_length": 8192, 00:04:34.287 "immediate_data": true, 00:04:34.287 "allow_duplicated_isid": false, 00:04:34.287 "error_recovery_level": 0, 00:04:34.287 "nop_timeout": 60, 00:04:34.287 "nop_in_interval": 30, 00:04:34.287 "disable_chap": false, 00:04:34.287 "require_chap": false, 00:04:34.287 "mutual_chap": false, 00:04:34.287 "chap_group": 0, 00:04:34.287 "max_large_datain_per_connection": 64, 00:04:34.287 "max_r2t_per_connection": 4, 00:04:34.287 "pdu_pool_size": 36864, 00:04:34.287 "immediate_data_pool_size": 16384, 00:04:34.287 "data_out_pool_size": 2048 00:04:34.287 } 00:04:34.287 } 00:04:34.287 ] 00:04:34.287 } 00:04:34.287 ] 00:04:34.287 } 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56990 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 56990 ']' 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 56990 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56990 00:04:34.287 killing process with pid 56990 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56990' 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 56990 00:04:34.287 09:54:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 56990 00:04:34.545 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57015 00:04:34.545 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:34.545 09:54:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57015 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57015 ']' 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57015 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:39.839 09:54:11 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57015 00:04:39.839 killing process with pid 57015 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57015' 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57015 00:04:39.839 09:54:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57015 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.098 00:04:40.098 real 0m6.598s 00:04:40.098 user 0m6.150s 00:04:40.098 sys 0m0.634s 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.098 ************************************ 00:04:40.098 END TEST skip_rpc_with_json 00:04:40.098 ************************************ 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.098 09:54:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.098 09:54:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.098 09:54:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.098 09:54:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.098 ************************************ 00:04:40.098 START TEST skip_rpc_with_delay 00:04:40.098 ************************************ 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.098 09:54:12 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.098 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.098 [2024-11-04 09:54:12.262498] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:40.357 ************************************ 00:04:40.357 END TEST skip_rpc_with_delay 00:04:40.357 ************************************ 00:04:40.357 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:40.357 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.357 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.357 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.357 00:04:40.357 real 0m0.096s 00:04:40.357 user 0m0.063s 00:04:40.357 sys 0m0.030s 00:04:40.357 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.357 09:54:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.357 09:54:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.357 09:54:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.357 09:54:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.357 09:54:12 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:40.357 09:54:12 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.357 09:54:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.357 ************************************ 00:04:40.357 START TEST exit_on_failed_rpc_init 00:04:40.357 ************************************ 00:04:40.357 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:40.357 09:54:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57125 00:04:40.357 09:54:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.357 09:54:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57125 00:04:40.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.357 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57125 ']' 00:04:40.358 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.358 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:40.358 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.358 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:40.358 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.358 [2024-11-04 09:54:12.407416] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:04:40.358 [2024-11-04 09:54:12.407836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57125 ] 00:04:40.617 [2024-11-04 09:54:12.553690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.617 [2024-11-04 09:54:12.607649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.617 [2024-11-04 09:54:12.682683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.876 09:54:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.876 [2024-11-04 09:54:12.957816] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:04:40.876 [2024-11-04 09:54:12.957930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57135 ] 00:04:41.135 [2024-11-04 09:54:13.112186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.135 [2024-11-04 09:54:13.179731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.135 [2024-11-04 09:54:13.179841] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
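The rpc.c error above is the point of the exit_on_failed_rpc_init test: both spdk_tgt instances are started without -r, so both try to bind the default RPC socket /var/tmp/spdk.sock and the second one aborts during RPC init. A minimal sketch of the same collision, using only the binary and core masks visible in the trace (the sleep is a crude stand-in for the suite's waitforlisten helper):

```bash
#!/usr/bin/env bash
# Sketch of the RPC-socket collision exercised by exit_on_failed_rpc_init.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First target binds the default RPC socket /var/tmp/spdk.sock (no -r given).
"$SPDK_BIN" -m 0x1 &
first_pid=$!
sleep 2   # crude stand-in for the suite's waitforlisten polling

# Second target tries the same default socket; rpc.c reports
# "RPC Unix domain socket path /var/tmp/spdk.sock in use" and the app exits non-zero.
if "$SPDK_BIN" -m 0x2; then
    echo "unexpected: second target started its RPC server" >&2
else
    echo "second target failed RPC init as expected"
fi

kill "$first_pid"
```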
00:04:41.135 [2024-11-04 09:54:13.179859] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:41.135 [2024-11-04 09:54:13.179869] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57125 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57125 ']' 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57125 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57125 00:04:41.135 killing process with pid 57125 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57125' 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57125 00:04:41.135 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57125 00:04:41.703 00:04:41.703 real 0m1.331s 00:04:41.703 user 0m1.453s 00:04:41.703 sys 0m0.378s 00:04:41.703 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.703 09:54:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.703 ************************************ 00:04:41.703 END TEST exit_on_failed_rpc_init 00:04:41.703 ************************************ 00:04:41.703 09:54:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.703 00:04:41.703 real 0m13.884s 00:04:41.703 user 0m12.915s 00:04:41.703 sys 0m1.556s 00:04:41.703 09:54:13 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.703 09:54:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.703 ************************************ 00:04:41.703 END TEST skip_rpc 00:04:41.703 ************************************ 00:04:41.703 09:54:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.703 09:54:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.703 09:54:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.703 09:54:13 -- common/autotest_common.sh@10 -- # set +x 00:04:41.703 
************************************ 00:04:41.703 START TEST rpc_client 00:04:41.703 ************************************ 00:04:41.703 09:54:13 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:41.703 * Looking for test storage... 00:04:41.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:41.703 09:54:13 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.703 09:54:13 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.703 09:54:13 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:41.962 09:54:13 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.962 09:54:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:41.962 09:54:13 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.962 09:54:13 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:41.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.962 --rc genhtml_branch_coverage=1 00:04:41.962 --rc genhtml_function_coverage=1 00:04:41.962 --rc genhtml_legend=1 00:04:41.962 --rc geninfo_all_blocks=1 00:04:41.962 --rc geninfo_unexecuted_blocks=1 00:04:41.962 00:04:41.962 ' 00:04:41.962 09:54:13 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:41.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.962 --rc genhtml_branch_coverage=1 00:04:41.962 --rc genhtml_function_coverage=1 00:04:41.962 --rc genhtml_legend=1 00:04:41.962 --rc geninfo_all_blocks=1 00:04:41.962 --rc geninfo_unexecuted_blocks=1 00:04:41.962 00:04:41.962 ' 00:04:41.962 09:54:13 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:41.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.962 --rc genhtml_branch_coverage=1 00:04:41.962 --rc genhtml_function_coverage=1 00:04:41.962 --rc genhtml_legend=1 00:04:41.962 --rc geninfo_all_blocks=1 00:04:41.962 --rc geninfo_unexecuted_blocks=1 00:04:41.962 00:04:41.962 ' 00:04:41.962 09:54:13 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:41.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.962 --rc genhtml_branch_coverage=1 00:04:41.962 --rc genhtml_function_coverage=1 00:04:41.962 --rc genhtml_legend=1 00:04:41.962 --rc geninfo_all_blocks=1 00:04:41.962 --rc geninfo_unexecuted_blocks=1 00:04:41.962 00:04:41.962 ' 00:04:41.962 09:54:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:41.962 OK 00:04:41.962 09:54:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:41.962 00:04:41.962 real 0m0.205s 00:04:41.962 user 0m0.123s 00:04:41.962 sys 0m0.088s 00:04:41.962 ************************************ 00:04:41.962 END TEST rpc_client 00:04:41.962 ************************************ 00:04:41.962 09:54:13 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.962 09:54:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:41.962 09:54:14 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.962 09:54:14 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.962 09:54:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.962 09:54:14 -- common/autotest_common.sh@10 -- # set +x 00:04:41.962 ************************************ 00:04:41.962 START TEST json_config 00:04:41.962 ************************************ 00:04:41.962 09:54:14 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:41.963 09:54:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.963 09:54:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.963 09:54:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.222 09:54:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.222 09:54:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.222 09:54:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.222 09:54:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.222 09:54:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.222 09:54:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.222 09:54:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.222 09:54:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.222 09:54:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.222 09:54:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.222 09:54:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.222 09:54:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.222 09:54:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:42.222 09:54:14 json_config -- scripts/common.sh@345 -- # : 1 00:04:42.222 09:54:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.222 09:54:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.222 09:54:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:42.222 09:54:14 json_config -- scripts/common.sh@353 -- # local d=1 00:04:42.222 09:54:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.222 09:54:14 json_config -- scripts/common.sh@355 -- # echo 1 00:04:42.222 09:54:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.222 09:54:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:42.223 09:54:14 json_config -- scripts/common.sh@353 -- # local d=2 00:04:42.223 09:54:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.223 09:54:14 json_config -- scripts/common.sh@355 -- # echo 2 00:04:42.223 09:54:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.223 09:54:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.223 09:54:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.223 09:54:14 json_config -- scripts/common.sh@368 -- # return 0 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.223 --rc genhtml_branch_coverage=1 00:04:42.223 --rc genhtml_function_coverage=1 00:04:42.223 --rc genhtml_legend=1 00:04:42.223 --rc geninfo_all_blocks=1 00:04:42.223 --rc geninfo_unexecuted_blocks=1 00:04:42.223 00:04:42.223 ' 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.223 --rc genhtml_branch_coverage=1 00:04:42.223 --rc genhtml_function_coverage=1 00:04:42.223 --rc genhtml_legend=1 00:04:42.223 --rc geninfo_all_blocks=1 00:04:42.223 --rc geninfo_unexecuted_blocks=1 00:04:42.223 00:04:42.223 ' 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.223 --rc genhtml_branch_coverage=1 00:04:42.223 --rc genhtml_function_coverage=1 00:04:42.223 --rc genhtml_legend=1 00:04:42.223 --rc geninfo_all_blocks=1 00:04:42.223 --rc geninfo_unexecuted_blocks=1 00:04:42.223 00:04:42.223 ' 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.223 --rc genhtml_branch_coverage=1 00:04:42.223 --rc genhtml_function_coverage=1 00:04:42.223 --rc genhtml_legend=1 00:04:42.223 --rc geninfo_all_blocks=1 00:04:42.223 --rc geninfo_unexecuted_blocks=1 00:04:42.223 00:04:42.223 ' 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.223 09:54:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.223 09:54:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.223 09:54:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.223 09:54:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.223 09:54:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.223 09:54:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.223 09:54:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.223 09:54:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.223 09:54:14 json_config -- paths/export.sh@5 -- # export PATH 00:04:42.223 09:54:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@51 -- # : 0 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:42.223 09:54:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:42.223 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:42.223 09:54:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:42.223 INFO: JSON configuration test init 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.223 Waiting for target to run... 
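The target started next runs with --wait-for-rpc, so it stays idle until configuration is pushed over the RPC socket; the trace then feeds it the output of gen_nvme.sh via load_config. A condensed sketch of that start-up sequence, using the binary, socket path and RPC commands that appear in the log (the polling loop and the pipe into load_config are assumptions standing in for the suite's helper functions):

```bash
#!/usr/bin/env bash
set -e
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock

# Start the target idle: with --wait-for-rpc it configures nothing
# until instructions arrive over the RPC socket.
"$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &

# Poll until the RPC server answers (stand-in for the suite's waitforlisten).
until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

# Push a generated subsystem configuration over the socket.
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems \
    | "$RPC" -s "$SOCK" load_config
```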
00:04:42.223 09:54:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:42.223 09:54:14 json_config -- json_config/common.sh@9 -- # local app=target 00:04:42.223 09:54:14 json_config -- json_config/common.sh@10 -- # shift 00:04:42.223 09:54:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.223 09:54:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.223 09:54:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.223 09:54:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.223 09:54:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.223 09:54:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57270 00:04:42.223 09:54:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:42.223 09:54:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:42.223 09:54:14 json_config -- json_config/common.sh@25 -- # waitforlisten 57270 /var/tmp/spdk_tgt.sock 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@833 -- # '[' -z 57270 ']' 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.223 09:54:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.223 [2024-11-04 09:54:14.289857] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:04:42.223 [2024-11-04 09:54:14.290107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57270 ] 00:04:42.791 [2024-11-04 09:54:14.740777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.791 [2024-11-04 09:54:14.790771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.359 09:54:15 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.359 09:54:15 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:43.359 09:54:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:43.359 00:04:43.359 09:54:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:43.359 09:54:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:43.359 09:54:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.359 09:54:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.359 09:54:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:43.359 09:54:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:43.359 09:54:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.359 09:54:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.359 09:54:15 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:43.359 09:54:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:43.359 09:54:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:43.618 [2024-11-04 09:54:15.663092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:43.877 09:54:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.877 09:54:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:43.877 09:54:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:43.877 09:54:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@54 -- # sort 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:44.136 09:54:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.136 09:54:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:44.136 09:54:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.136 09:54:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:44.136 09:54:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.136 09:54:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:44.395 MallocForNvmf0 00:04:44.395 09:54:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.395 09:54:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:44.655 MallocForNvmf1 00:04:44.655 09:54:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:44.655 09:54:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:45.233 [2024-11-04 09:54:17.086512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.233 09:54:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.233 09:54:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:45.518 09:54:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.518 09:54:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:45.518 09:54:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.518 09:54:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:45.777 09:54:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:45.777 09:54:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:46.036 [2024-11-04 09:54:18.155136] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:46.036 09:54:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:46.036 09:54:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.036 09:54:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.295 09:54:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:46.295 09:54:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.295 09:54:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.295 09:54:18 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:46.295 09:54:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:46.295 09:54:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:46.554 MallocBdevForConfigChangeCheck 00:04:46.554 09:54:18 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:46.554 09:54:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.554 09:54:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.554 09:54:18 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:46.554 09:54:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.813 INFO: shutting down applications... 00:04:46.813 09:54:18 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
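What follows is the tear-down half of the test: every subsystem is cleared over RPC, the remaining configuration is checked to be empty, and the target is stopped with SIGINT. A compact sketch built from the same helper scripts invoked below; the exact retry counts used by json_config.sh differ, so treat the loop bound here as illustrative:

```bash
#!/usr/bin/env bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
testdir=/home/vagrant/spdk_repo/spdk/test/json_config
target_pid=${1:?pid of the running spdk_tgt}   # 57270 in this run

# Drop every configured object (nvmf subsystems, malloc bdevs, ...) over RPC.
"$testdir/clear_config.py" -s "$SOCK" clear_config

# Verify nothing is left: check_empty fails if any subsystem still has config.
"$RPC" -s "$SOCK" save_config \
    | "$testdir/config_filter.py" -method delete_global_parameters \
    | "$testdir/config_filter.py" -method check_empty

# Stop the target and wait (bounded) for the process to exit.
kill -SIGINT "$target_pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$target_pid" 2>/dev/null || break
    sleep 0.5
done
```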
00:04:46.813 09:54:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:46.813 09:54:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:46.813 09:54:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:46.813 09:54:18 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:47.380 Calling clear_iscsi_subsystem 00:04:47.380 Calling clear_nvmf_subsystem 00:04:47.380 Calling clear_nbd_subsystem 00:04:47.380 Calling clear_ublk_subsystem 00:04:47.380 Calling clear_vhost_blk_subsystem 00:04:47.380 Calling clear_vhost_scsi_subsystem 00:04:47.380 Calling clear_bdev_subsystem 00:04:47.380 09:54:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:47.380 09:54:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:47.380 09:54:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:47.380 09:54:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.380 09:54:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:47.380 09:54:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:47.638 09:54:19 json_config -- json_config/json_config.sh@352 -- # break 00:04:47.638 09:54:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:47.638 09:54:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:47.638 09:54:19 json_config -- json_config/common.sh@31 -- # local app=target 00:04:47.638 09:54:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.638 09:54:19 json_config -- json_config/common.sh@35 -- # [[ -n 57270 ]] 00:04:47.638 09:54:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57270 00:04:47.638 09:54:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.638 09:54:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.638 09:54:19 json_config -- json_config/common.sh@41 -- # kill -0 57270 00:04:47.638 09:54:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.205 09:54:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.205 09:54:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.205 09:54:20 json_config -- json_config/common.sh@41 -- # kill -0 57270 00:04:48.205 09:54:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.205 09:54:20 json_config -- json_config/common.sh@43 -- # break 00:04:48.205 09:54:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.205 SPDK target shutdown done 00:04:48.205 09:54:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.205 INFO: relaunching applications... 00:04:48.205 09:54:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
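"Relaunching" here means booting a fresh spdk_tgt directly from the JSON file that the previous instance presumably wrote via save_config, rather than configuring it over RPC; the command on the next line does exactly that. A minimal sketch of the relaunch step, with the path and flags taken from the log:

```bash
#!/usr/bin/env bash
set -e
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
SOCK=/var/tmp/spdk_tgt.sock
CONFIG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

# $CONFIG was produced with "rpc.py save_config" against the first instance.
# Booting with --json replays it during startup, so no RPC interaction is needed.
"$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &
echo "relaunched spdk_tgt with pid $!"
```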
00:04:48.205 09:54:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.205 09:54:20 json_config -- json_config/common.sh@9 -- # local app=target 00:04:48.205 09:54:20 json_config -- json_config/common.sh@10 -- # shift 00:04:48.205 09:54:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.205 09:54:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.205 09:54:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.205 09:54:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.205 09:54:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.205 Waiting for target to run... 00:04:48.205 09:54:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57471 00:04:48.205 09:54:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.205 09:54:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.205 09:54:20 json_config -- json_config/common.sh@25 -- # waitforlisten 57471 /var/tmp/spdk_tgt.sock 00:04:48.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.205 09:54:20 json_config -- common/autotest_common.sh@833 -- # '[' -z 57471 ']' 00:04:48.205 09:54:20 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.205 09:54:20 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.205 09:54:20 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.205 09:54:20 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.205 09:54:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.205 [2024-11-04 09:54:20.278104] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:04:48.205 [2024-11-04 09:54:20.278188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57471 ] 00:04:48.773 [2024-11-04 09:54:20.693941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.773 [2024-11-04 09:54:20.738162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.773 [2024-11-04 09:54:20.876572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.031 [2024-11-04 09:54:21.093826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.031 [2024-11-04 09:54:21.125981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:49.290 00:04:49.290 INFO: Checking if target configuration is the same... 
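The check that follows compares the live configuration of the relaunched target against the JSON file it booted from: json_diff.sh normalizes both sides with config_filter.py -method sort and diffs them, so the step passes only if nothing changed across the restart. A condensed equivalent of that comparison (temporary-file handling simplified):

```bash
#!/usr/bin/env bash
set -e
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
CONFIG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

live=$(mktemp)
saved=$(mktemp)

# Normalize both sides so key/array ordering cannot show up as a difference.
"$RPC" -s "$SOCK" save_config | "$FILTER" -method sort > "$live"
"$FILTER" -method sort < "$CONFIG" > "$saved"

if diff -u "$saved" "$live"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'ERROR: configuration changed across the restart' >&2
    exit 1
fi
rm -f "$live" "$saved"
```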
00:04:49.290 09:54:21 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.290 09:54:21 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:49.290 09:54:21 json_config -- json_config/common.sh@26 -- # echo '' 00:04:49.290 09:54:21 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:49.290 09:54:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:49.290 09:54:21 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.290 09:54:21 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:49.290 09:54:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.290 + '[' 2 -ne 2 ']' 00:04:49.290 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:49.290 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:49.290 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:49.290 +++ basename /dev/fd/62 00:04:49.290 ++ mktemp /tmp/62.XXX 00:04:49.290 + tmp_file_1=/tmp/62.NpE 00:04:49.290 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.290 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:49.290 + tmp_file_2=/tmp/spdk_tgt_config.json.Ces 00:04:49.290 + ret=0 00:04:49.290 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.858 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.858 + diff -u /tmp/62.NpE /tmp/spdk_tgt_config.json.Ces 00:04:49.858 INFO: JSON config files are the same 00:04:49.858 + echo 'INFO: JSON config files are the same' 00:04:49.858 + rm /tmp/62.NpE /tmp/spdk_tgt_config.json.Ces 00:04:49.858 + exit 0 00:04:49.858 INFO: changing configuration and checking if this can be detected... 00:04:49.858 09:54:21 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:49.858 09:54:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:49.858 09:54:21 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.858 09:54:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.117 09:54:22 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.117 09:54:22 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:50.117 09:54:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.117 + '[' 2 -ne 2 ']' 00:04:50.117 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:50.117 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:50.117 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:50.117 +++ basename /dev/fd/62 00:04:50.117 ++ mktemp /tmp/62.XXX 00:04:50.117 + tmp_file_1=/tmp/62.eM8 00:04:50.117 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.117 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.117 + tmp_file_2=/tmp/spdk_tgt_config.json.V3a 00:04:50.117 + ret=0 00:04:50.117 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.683 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.683 + diff -u /tmp/62.eM8 /tmp/spdk_tgt_config.json.V3a 00:04:50.683 + ret=1 00:04:50.683 + echo '=== Start of file: /tmp/62.eM8 ===' 00:04:50.683 + cat /tmp/62.eM8 00:04:50.683 + echo '=== End of file: /tmp/62.eM8 ===' 00:04:50.683 + echo '' 00:04:50.683 + echo '=== Start of file: /tmp/spdk_tgt_config.json.V3a ===' 00:04:50.683 + cat /tmp/spdk_tgt_config.json.V3a 00:04:50.683 + echo '=== End of file: /tmp/spdk_tgt_config.json.V3a ===' 00:04:50.683 + echo '' 00:04:50.683 + rm /tmp/62.eM8 /tmp/spdk_tgt_config.json.V3a 00:04:50.683 + exit 1 00:04:50.683 INFO: configuration change detected. 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@324 -- # [[ -n 57471 ]] 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.683 09:54:22 json_config -- json_config/json_config.sh@330 -- # killprocess 57471 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@952 -- # '[' -z 57471 ']' 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@956 -- # kill -0 57471 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@957 -- # uname 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57471 00:04:50.683 
killing process with pid 57471 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57471' 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@971 -- # kill 57471 00:04:50.683 09:54:22 json_config -- common/autotest_common.sh@976 -- # wait 57471 00:04:50.941 09:54:23 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.941 09:54:23 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:50.941 09:54:23 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.941 09:54:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.941 INFO: Success 00:04:50.941 09:54:23 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:50.941 09:54:23 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:50.941 ************************************ 00:04:50.941 END TEST json_config 00:04:50.941 ************************************ 00:04:50.941 00:04:50.941 real 0m9.020s 00:04:50.941 user 0m13.026s 00:04:50.941 sys 0m1.816s 00:04:50.941 09:54:23 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:50.941 09:54:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.941 09:54:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:50.941 09:54:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.941 09:54:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.941 09:54:23 -- common/autotest_common.sh@10 -- # set +x 00:04:50.941 ************************************ 00:04:50.941 START TEST json_config_extra_key 00:04:50.941 ************************************ 00:04:50.941 09:54:23 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.200 09:54:23 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:51.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.200 --rc genhtml_branch_coverage=1 00:04:51.200 --rc genhtml_function_coverage=1 00:04:51.200 --rc genhtml_legend=1 00:04:51.200 --rc geninfo_all_blocks=1 00:04:51.200 --rc geninfo_unexecuted_blocks=1 00:04:51.200 00:04:51.200 ' 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:51.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.200 --rc genhtml_branch_coverage=1 00:04:51.200 --rc genhtml_function_coverage=1 00:04:51.200 --rc genhtml_legend=1 00:04:51.200 --rc geninfo_all_blocks=1 00:04:51.200 --rc geninfo_unexecuted_blocks=1 00:04:51.200 00:04:51.200 ' 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:51.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.200 --rc genhtml_branch_coverage=1 00:04:51.200 --rc genhtml_function_coverage=1 00:04:51.200 --rc genhtml_legend=1 00:04:51.200 --rc geninfo_all_blocks=1 00:04:51.200 --rc geninfo_unexecuted_blocks=1 00:04:51.200 00:04:51.200 ' 00:04:51.200 09:54:23 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:51.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.200 --rc genhtml_branch_coverage=1 00:04:51.200 --rc genhtml_function_coverage=1 00:04:51.200 --rc genhtml_legend=1 00:04:51.200 --rc geninfo_all_blocks=1 00:04:51.200 --rc geninfo_unexecuted_blocks=1 00:04:51.200 00:04:51.200 ' 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.200 09:54:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.200 09:54:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.200 09:54:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.200 09:54:23 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.200 09:54:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.200 09:54:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.200 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.200 09:54:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.200 INFO: launching applications... 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:51.200 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.201 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.201 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
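The extra_key variant drives the target through the associative arrays declared above (app_pid, app_socket, app_params, configs_path). A condensed sketch of how the 'target' entry feeds the launch traced below; the real logic lives in test/json_config/common.sh and may differ in detail.

# Sketch of the array-driven launch for app=target, based on the values in this trace.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

app=target
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
    -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!                                   # 57625 in the run below
waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"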
00:04:51.201 09:54:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57625 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:51.201 Waiting for target to run... 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.201 09:54:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57625 /var/tmp/spdk_tgt.sock 00:04:51.201 09:54:23 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57625 ']' 00:04:51.201 09:54:23 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.201 09:54:23 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.201 09:54:23 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.201 09:54:23 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.201 09:54:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.201 [2024-11-04 09:54:23.366254] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:04:51.201 [2024-11-04 09:54:23.366634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57625 ] 00:04:51.768 [2024-11-04 09:54:23.812363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.768 [2024-11-04 09:54:23.874318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.768 [2024-11-04 09:54:23.910070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:52.336 00:04:52.336 INFO: shutting down applications... 00:04:52.336 09:54:24 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.336 09:54:24 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:52.336 09:54:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
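waitforlisten itself is not traced here; based on the messages it prints and the max_retries=100 default visible above, a plausible minimal version looks like the following. This is an assumption for illustration, not the helper from test/common/autotest_common.sh.

# Hypothetical minimal waitforlisten: poll the RPC socket until the target answers.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1      # target died while starting
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods >/dev/null 2>&1; then
            return 0                                # RPC socket is answering
        fi
        sleep 0.1
    done
    return 1
}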
00:04:52.336 09:54:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57625 ]] 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57625 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57625 00:04:52.336 09:54:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.903 09:54:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.903 09:54:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.903 09:54:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57625 00:04:52.903 SPDK target shutdown done 00:04:52.903 Success 00:04:52.903 09:54:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.903 09:54:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:52.903 09:54:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.903 09:54:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.903 09:54:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:52.903 00:04:52.903 real 0m1.825s 00:04:52.903 user 0m1.756s 00:04:52.903 sys 0m0.469s 00:04:52.903 ************************************ 00:04:52.903 END TEST json_config_extra_key 00:04:52.903 ************************************ 00:04:52.903 09:54:24 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.903 09:54:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.903 09:54:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.903 09:54:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.903 09:54:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.903 09:54:24 -- common/autotest_common.sh@10 -- # set +x 00:04:52.903 ************************************ 00:04:52.903 START TEST alias_rpc 00:04:52.903 ************************************ 00:04:52.903 09:54:24 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.903 * Looking for test storage... 
00:04:52.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:52.903 09:54:25 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.903 09:54:25 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.903 09:54:25 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.162 09:54:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.162 --rc genhtml_branch_coverage=1 00:04:53.162 --rc genhtml_function_coverage=1 00:04:53.162 --rc genhtml_legend=1 00:04:53.162 --rc geninfo_all_blocks=1 00:04:53.162 --rc geninfo_unexecuted_blocks=1 00:04:53.162 00:04:53.162 ' 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.162 --rc genhtml_branch_coverage=1 00:04:53.162 --rc genhtml_function_coverage=1 00:04:53.162 --rc genhtml_legend=1 00:04:53.162 --rc geninfo_all_blocks=1 00:04:53.162 --rc geninfo_unexecuted_blocks=1 00:04:53.162 00:04:53.162 ' 00:04:53.162 09:54:25 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.162 --rc genhtml_branch_coverage=1 00:04:53.162 --rc genhtml_function_coverage=1 00:04:53.162 --rc genhtml_legend=1 00:04:53.162 --rc geninfo_all_blocks=1 00:04:53.162 --rc geninfo_unexecuted_blocks=1 00:04:53.162 00:04:53.162 ' 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.162 --rc genhtml_branch_coverage=1 00:04:53.162 --rc genhtml_function_coverage=1 00:04:53.162 --rc genhtml_legend=1 00:04:53.162 --rc geninfo_all_blocks=1 00:04:53.162 --rc geninfo_unexecuted_blocks=1 00:04:53.162 00:04:53.162 ' 00:04:53.162 09:54:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.162 09:54:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57698 00:04:53.162 09:54:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.162 09:54:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57698 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57698 ']' 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.162 09:54:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.162 [2024-11-04 09:54:25.243556] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:04:53.162 [2024-11-04 09:54:25.244186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57698 ] 00:04:53.447 [2024-11-04 09:54:25.395245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.447 [2024-11-04 09:54:25.457036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.447 [2024-11-04 09:54:25.533947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:53.706 09:54:25 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.706 09:54:25 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:53.706 09:54:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:53.964 09:54:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57698 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57698 ']' 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57698 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57698 00:04:53.964 killing process with pid 57698 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57698' 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@971 -- # kill 57698 00:04:53.964 09:54:26 alias_rpc -- common/autotest_common.sh@976 -- # wait 57698 00:04:54.531 ************************************ 00:04:54.531 END TEST alias_rpc 00:04:54.531 ************************************ 00:04:54.531 00:04:54.531 real 0m1.504s 00:04:54.531 user 0m1.613s 00:04:54.531 sys 0m0.420s 00:04:54.531 09:54:26 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.531 09:54:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.531 09:54:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:54.531 09:54:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.531 09:54:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.531 09:54:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.531 09:54:26 -- common/autotest_common.sh@10 -- # set +x 00:04:54.531 ************************************ 00:04:54.531 START TEST spdkcli_tcp 00:04:54.531 ************************************ 00:04:54.531 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.531 * Looking for test storage... 
00:04:54.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.531 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.531 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.531 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.790 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.790 09:54:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.791 09:54:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.791 --rc genhtml_branch_coverage=1 00:04:54.791 --rc genhtml_function_coverage=1 00:04:54.791 --rc genhtml_legend=1 00:04:54.791 --rc geninfo_all_blocks=1 00:04:54.791 --rc geninfo_unexecuted_blocks=1 00:04:54.791 00:04:54.791 ' 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.791 --rc genhtml_branch_coverage=1 00:04:54.791 --rc genhtml_function_coverage=1 00:04:54.791 --rc genhtml_legend=1 00:04:54.791 --rc geninfo_all_blocks=1 00:04:54.791 --rc geninfo_unexecuted_blocks=1 00:04:54.791 
00:04:54.791 ' 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.791 --rc genhtml_branch_coverage=1 00:04:54.791 --rc genhtml_function_coverage=1 00:04:54.791 --rc genhtml_legend=1 00:04:54.791 --rc geninfo_all_blocks=1 00:04:54.791 --rc geninfo_unexecuted_blocks=1 00:04:54.791 00:04:54.791 ' 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.791 --rc genhtml_branch_coverage=1 00:04:54.791 --rc genhtml_function_coverage=1 00:04:54.791 --rc genhtml_legend=1 00:04:54.791 --rc geninfo_all_blocks=1 00:04:54.791 --rc geninfo_unexecuted_blocks=1 00:04:54.791 00:04:54.791 ' 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57780 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.791 09:54:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57780 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57780 ']' 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:54.791 09:54:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.791 [2024-11-04 09:54:26.787824] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:04:54.791 [2024-11-04 09:54:26.788157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57780 ] 00:04:54.791 [2024-11-04 09:54:26.930876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.049 [2024-11-04 09:54:26.994441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.049 [2024-11-04 09:54:26.994450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.050 [2024-11-04 09:54:27.067939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.308 09:54:27 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.308 09:54:27 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:55.308 09:54:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57784 00:04:55.308 09:54:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:55.308 09:54:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:55.567 [ 00:04:55.567 "bdev_malloc_delete", 00:04:55.567 "bdev_malloc_create", 00:04:55.567 "bdev_null_resize", 00:04:55.567 "bdev_null_delete", 00:04:55.567 "bdev_null_create", 00:04:55.567 "bdev_nvme_cuse_unregister", 00:04:55.567 "bdev_nvme_cuse_register", 00:04:55.567 "bdev_opal_new_user", 00:04:55.567 "bdev_opal_set_lock_state", 00:04:55.567 "bdev_opal_delete", 00:04:55.567 "bdev_opal_get_info", 00:04:55.567 "bdev_opal_create", 00:04:55.567 "bdev_nvme_opal_revert", 00:04:55.567 "bdev_nvme_opal_init", 00:04:55.567 "bdev_nvme_send_cmd", 00:04:55.567 "bdev_nvme_set_keys", 00:04:55.567 "bdev_nvme_get_path_iostat", 00:04:55.567 "bdev_nvme_get_mdns_discovery_info", 00:04:55.567 "bdev_nvme_stop_mdns_discovery", 00:04:55.567 "bdev_nvme_start_mdns_discovery", 00:04:55.567 "bdev_nvme_set_multipath_policy", 00:04:55.567 "bdev_nvme_set_preferred_path", 00:04:55.567 "bdev_nvme_get_io_paths", 00:04:55.567 "bdev_nvme_remove_error_injection", 00:04:55.567 "bdev_nvme_add_error_injection", 00:04:55.567 "bdev_nvme_get_discovery_info", 00:04:55.567 "bdev_nvme_stop_discovery", 00:04:55.567 "bdev_nvme_start_discovery", 00:04:55.567 "bdev_nvme_get_controller_health_info", 00:04:55.567 "bdev_nvme_disable_controller", 00:04:55.567 "bdev_nvme_enable_controller", 00:04:55.567 "bdev_nvme_reset_controller", 00:04:55.567 "bdev_nvme_get_transport_statistics", 00:04:55.567 "bdev_nvme_apply_firmware", 00:04:55.567 "bdev_nvme_detach_controller", 00:04:55.567 "bdev_nvme_get_controllers", 00:04:55.567 "bdev_nvme_attach_controller", 00:04:55.567 "bdev_nvme_set_hotplug", 00:04:55.567 "bdev_nvme_set_options", 00:04:55.567 "bdev_passthru_delete", 00:04:55.567 "bdev_passthru_create", 00:04:55.567 "bdev_lvol_set_parent_bdev", 00:04:55.567 "bdev_lvol_set_parent", 00:04:55.567 "bdev_lvol_check_shallow_copy", 00:04:55.567 "bdev_lvol_start_shallow_copy", 00:04:55.567 "bdev_lvol_grow_lvstore", 00:04:55.567 "bdev_lvol_get_lvols", 00:04:55.567 "bdev_lvol_get_lvstores", 00:04:55.567 "bdev_lvol_delete", 00:04:55.567 "bdev_lvol_set_read_only", 00:04:55.567 "bdev_lvol_resize", 00:04:55.567 "bdev_lvol_decouple_parent", 00:04:55.567 "bdev_lvol_inflate", 00:04:55.567 "bdev_lvol_rename", 00:04:55.567 "bdev_lvol_clone_bdev", 00:04:55.567 "bdev_lvol_clone", 00:04:55.567 "bdev_lvol_snapshot", 
00:04:55.567 "bdev_lvol_create", 00:04:55.567 "bdev_lvol_delete_lvstore", 00:04:55.567 "bdev_lvol_rename_lvstore", 00:04:55.567 "bdev_lvol_create_lvstore", 00:04:55.567 "bdev_raid_set_options", 00:04:55.567 "bdev_raid_remove_base_bdev", 00:04:55.567 "bdev_raid_add_base_bdev", 00:04:55.567 "bdev_raid_delete", 00:04:55.567 "bdev_raid_create", 00:04:55.567 "bdev_raid_get_bdevs", 00:04:55.567 "bdev_error_inject_error", 00:04:55.567 "bdev_error_delete", 00:04:55.567 "bdev_error_create", 00:04:55.567 "bdev_split_delete", 00:04:55.567 "bdev_split_create", 00:04:55.567 "bdev_delay_delete", 00:04:55.567 "bdev_delay_create", 00:04:55.567 "bdev_delay_update_latency", 00:04:55.567 "bdev_zone_block_delete", 00:04:55.567 "bdev_zone_block_create", 00:04:55.567 "blobfs_create", 00:04:55.567 "blobfs_detect", 00:04:55.567 "blobfs_set_cache_size", 00:04:55.567 "bdev_aio_delete", 00:04:55.567 "bdev_aio_rescan", 00:04:55.567 "bdev_aio_create", 00:04:55.567 "bdev_ftl_set_property", 00:04:55.567 "bdev_ftl_get_properties", 00:04:55.567 "bdev_ftl_get_stats", 00:04:55.567 "bdev_ftl_unmap", 00:04:55.567 "bdev_ftl_unload", 00:04:55.567 "bdev_ftl_delete", 00:04:55.567 "bdev_ftl_load", 00:04:55.567 "bdev_ftl_create", 00:04:55.567 "bdev_virtio_attach_controller", 00:04:55.567 "bdev_virtio_scsi_get_devices", 00:04:55.567 "bdev_virtio_detach_controller", 00:04:55.567 "bdev_virtio_blk_set_hotplug", 00:04:55.567 "bdev_iscsi_delete", 00:04:55.567 "bdev_iscsi_create", 00:04:55.567 "bdev_iscsi_set_options", 00:04:55.567 "bdev_uring_delete", 00:04:55.567 "bdev_uring_rescan", 00:04:55.567 "bdev_uring_create", 00:04:55.567 "accel_error_inject_error", 00:04:55.567 "ioat_scan_accel_module", 00:04:55.567 "dsa_scan_accel_module", 00:04:55.567 "iaa_scan_accel_module", 00:04:55.567 "keyring_file_remove_key", 00:04:55.567 "keyring_file_add_key", 00:04:55.567 "keyring_linux_set_options", 00:04:55.567 "fsdev_aio_delete", 00:04:55.567 "fsdev_aio_create", 00:04:55.567 "iscsi_get_histogram", 00:04:55.567 "iscsi_enable_histogram", 00:04:55.567 "iscsi_set_options", 00:04:55.567 "iscsi_get_auth_groups", 00:04:55.568 "iscsi_auth_group_remove_secret", 00:04:55.568 "iscsi_auth_group_add_secret", 00:04:55.568 "iscsi_delete_auth_group", 00:04:55.568 "iscsi_create_auth_group", 00:04:55.568 "iscsi_set_discovery_auth", 00:04:55.568 "iscsi_get_options", 00:04:55.568 "iscsi_target_node_request_logout", 00:04:55.568 "iscsi_target_node_set_redirect", 00:04:55.568 "iscsi_target_node_set_auth", 00:04:55.568 "iscsi_target_node_add_lun", 00:04:55.568 "iscsi_get_stats", 00:04:55.568 "iscsi_get_connections", 00:04:55.568 "iscsi_portal_group_set_auth", 00:04:55.568 "iscsi_start_portal_group", 00:04:55.568 "iscsi_delete_portal_group", 00:04:55.568 "iscsi_create_portal_group", 00:04:55.568 "iscsi_get_portal_groups", 00:04:55.568 "iscsi_delete_target_node", 00:04:55.568 "iscsi_target_node_remove_pg_ig_maps", 00:04:55.568 "iscsi_target_node_add_pg_ig_maps", 00:04:55.568 "iscsi_create_target_node", 00:04:55.568 "iscsi_get_target_nodes", 00:04:55.568 "iscsi_delete_initiator_group", 00:04:55.568 "iscsi_initiator_group_remove_initiators", 00:04:55.568 "iscsi_initiator_group_add_initiators", 00:04:55.568 "iscsi_create_initiator_group", 00:04:55.568 "iscsi_get_initiator_groups", 00:04:55.568 "nvmf_set_crdt", 00:04:55.568 "nvmf_set_config", 00:04:55.568 "nvmf_set_max_subsystems", 00:04:55.568 "nvmf_stop_mdns_prr", 00:04:55.568 "nvmf_publish_mdns_prr", 00:04:55.568 "nvmf_subsystem_get_listeners", 00:04:55.568 "nvmf_subsystem_get_qpairs", 00:04:55.568 
"nvmf_subsystem_get_controllers", 00:04:55.568 "nvmf_get_stats", 00:04:55.568 "nvmf_get_transports", 00:04:55.568 "nvmf_create_transport", 00:04:55.568 "nvmf_get_targets", 00:04:55.568 "nvmf_delete_target", 00:04:55.568 "nvmf_create_target", 00:04:55.568 "nvmf_subsystem_allow_any_host", 00:04:55.568 "nvmf_subsystem_set_keys", 00:04:55.568 "nvmf_subsystem_remove_host", 00:04:55.568 "nvmf_subsystem_add_host", 00:04:55.568 "nvmf_ns_remove_host", 00:04:55.568 "nvmf_ns_add_host", 00:04:55.568 "nvmf_subsystem_remove_ns", 00:04:55.568 "nvmf_subsystem_set_ns_ana_group", 00:04:55.568 "nvmf_subsystem_add_ns", 00:04:55.568 "nvmf_subsystem_listener_set_ana_state", 00:04:55.568 "nvmf_discovery_get_referrals", 00:04:55.568 "nvmf_discovery_remove_referral", 00:04:55.568 "nvmf_discovery_add_referral", 00:04:55.568 "nvmf_subsystem_remove_listener", 00:04:55.568 "nvmf_subsystem_add_listener", 00:04:55.568 "nvmf_delete_subsystem", 00:04:55.568 "nvmf_create_subsystem", 00:04:55.568 "nvmf_get_subsystems", 00:04:55.568 "env_dpdk_get_mem_stats", 00:04:55.568 "nbd_get_disks", 00:04:55.568 "nbd_stop_disk", 00:04:55.568 "nbd_start_disk", 00:04:55.568 "ublk_recover_disk", 00:04:55.568 "ublk_get_disks", 00:04:55.568 "ublk_stop_disk", 00:04:55.568 "ublk_start_disk", 00:04:55.568 "ublk_destroy_target", 00:04:55.568 "ublk_create_target", 00:04:55.568 "virtio_blk_create_transport", 00:04:55.568 "virtio_blk_get_transports", 00:04:55.568 "vhost_controller_set_coalescing", 00:04:55.568 "vhost_get_controllers", 00:04:55.568 "vhost_delete_controller", 00:04:55.568 "vhost_create_blk_controller", 00:04:55.568 "vhost_scsi_controller_remove_target", 00:04:55.568 "vhost_scsi_controller_add_target", 00:04:55.568 "vhost_start_scsi_controller", 00:04:55.568 "vhost_create_scsi_controller", 00:04:55.568 "thread_set_cpumask", 00:04:55.568 "scheduler_set_options", 00:04:55.568 "framework_get_governor", 00:04:55.568 "framework_get_scheduler", 00:04:55.568 "framework_set_scheduler", 00:04:55.568 "framework_get_reactors", 00:04:55.568 "thread_get_io_channels", 00:04:55.568 "thread_get_pollers", 00:04:55.568 "thread_get_stats", 00:04:55.568 "framework_monitor_context_switch", 00:04:55.568 "spdk_kill_instance", 00:04:55.568 "log_enable_timestamps", 00:04:55.568 "log_get_flags", 00:04:55.568 "log_clear_flag", 00:04:55.568 "log_set_flag", 00:04:55.568 "log_get_level", 00:04:55.568 "log_set_level", 00:04:55.568 "log_get_print_level", 00:04:55.568 "log_set_print_level", 00:04:55.568 "framework_enable_cpumask_locks", 00:04:55.568 "framework_disable_cpumask_locks", 00:04:55.568 "framework_wait_init", 00:04:55.568 "framework_start_init", 00:04:55.568 "scsi_get_devices", 00:04:55.568 "bdev_get_histogram", 00:04:55.568 "bdev_enable_histogram", 00:04:55.568 "bdev_set_qos_limit", 00:04:55.568 "bdev_set_qd_sampling_period", 00:04:55.568 "bdev_get_bdevs", 00:04:55.568 "bdev_reset_iostat", 00:04:55.568 "bdev_get_iostat", 00:04:55.568 "bdev_examine", 00:04:55.568 "bdev_wait_for_examine", 00:04:55.568 "bdev_set_options", 00:04:55.568 "accel_get_stats", 00:04:55.568 "accel_set_options", 00:04:55.568 "accel_set_driver", 00:04:55.568 "accel_crypto_key_destroy", 00:04:55.568 "accel_crypto_keys_get", 00:04:55.568 "accel_crypto_key_create", 00:04:55.568 "accel_assign_opc", 00:04:55.568 "accel_get_module_info", 00:04:55.568 "accel_get_opc_assignments", 00:04:55.568 "vmd_rescan", 00:04:55.568 "vmd_remove_device", 00:04:55.568 "vmd_enable", 00:04:55.568 "sock_get_default_impl", 00:04:55.568 "sock_set_default_impl", 00:04:55.568 "sock_impl_set_options", 00:04:55.568 
"sock_impl_get_options", 00:04:55.568 "iobuf_get_stats", 00:04:55.568 "iobuf_set_options", 00:04:55.568 "keyring_get_keys", 00:04:55.568 "framework_get_pci_devices", 00:04:55.568 "framework_get_config", 00:04:55.568 "framework_get_subsystems", 00:04:55.568 "fsdev_set_opts", 00:04:55.568 "fsdev_get_opts", 00:04:55.568 "trace_get_info", 00:04:55.568 "trace_get_tpoint_group_mask", 00:04:55.568 "trace_disable_tpoint_group", 00:04:55.568 "trace_enable_tpoint_group", 00:04:55.568 "trace_clear_tpoint_mask", 00:04:55.568 "trace_set_tpoint_mask", 00:04:55.568 "notify_get_notifications", 00:04:55.568 "notify_get_types", 00:04:55.568 "spdk_get_version", 00:04:55.568 "rpc_get_methods" 00:04:55.568 ] 00:04:55.568 09:54:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.568 09:54:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.568 09:54:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57780 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57780 ']' 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57780 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57780 00:04:55.568 killing process with pid 57780 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57780' 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57780 00:04:55.568 09:54:27 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57780 00:04:55.827 ************************************ 00:04:55.827 END TEST spdkcli_tcp 00:04:55.827 ************************************ 00:04:55.827 00:04:55.827 real 0m1.461s 00:04:55.827 user 0m2.466s 00:04:55.827 sys 0m0.468s 00:04:55.827 09:54:27 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.827 09:54:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.086 09:54:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.086 09:54:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.086 09:54:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.086 09:54:28 -- common/autotest_common.sh@10 -- # set +x 00:04:56.086 ************************************ 00:04:56.086 START TEST dpdk_mem_utility 00:04:56.086 ************************************ 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.086 * Looking for test storage... 
00:04:56.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:56.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
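The dpdk_mem_utility test traced below asks the running spdk_tgt for a memory dump via the env_dpdk_get_mem_stats RPC (which reports the dump file name) and then summarizes that dump with scripts/dpdk_mem_info.py. Condensed, the flow is roughly as follows; paths and RPC names are taken from the log, and the default dump location is an assumption.

# Outline of test_dpdk_mem_info.sh as seen in this trace.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# -> {"filename": "/tmp/spdk_mem_dump.txt"}   (target writes the dump there)

/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heaps, mempools, memzones summary
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # detailed view of heap id 0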
00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.086 09:54:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.086 --rc genhtml_branch_coverage=1 00:04:56.086 --rc genhtml_function_coverage=1 00:04:56.086 --rc genhtml_legend=1 00:04:56.086 --rc geninfo_all_blocks=1 00:04:56.086 --rc geninfo_unexecuted_blocks=1 00:04:56.086 00:04:56.086 ' 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.086 --rc genhtml_branch_coverage=1 00:04:56.086 --rc genhtml_function_coverage=1 00:04:56.086 --rc genhtml_legend=1 00:04:56.086 --rc geninfo_all_blocks=1 00:04:56.086 --rc geninfo_unexecuted_blocks=1 00:04:56.086 00:04:56.086 ' 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.086 --rc genhtml_branch_coverage=1 00:04:56.086 --rc genhtml_function_coverage=1 00:04:56.086 --rc genhtml_legend=1 00:04:56.086 --rc geninfo_all_blocks=1 00:04:56.086 --rc geninfo_unexecuted_blocks=1 00:04:56.086 00:04:56.086 ' 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.086 --rc genhtml_branch_coverage=1 00:04:56.086 --rc genhtml_function_coverage=1 00:04:56.086 --rc genhtml_legend=1 00:04:56.086 --rc geninfo_all_blocks=1 00:04:56.086 --rc geninfo_unexecuted_blocks=1 00:04:56.086 00:04:56.086 ' 00:04:56.086 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.086 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57866 00:04:56.086 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57866 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57866 ']' 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.086 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.086 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.345 [2024-11-04 09:54:28.313288] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:04:56.345 [2024-11-04 09:54:28.313698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57866 ] 00:04:56.345 [2024-11-04 09:54:28.462319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.604 [2024-11-04 09:54:28.528489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.604 [2024-11-04 09:54:28.602732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.864 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.864 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:56.864 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.864 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.864 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.864 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.864 { 00:04:56.864 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.864 } 00:04:56.864 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.864 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.864 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:56.864 1 heaps totaling size 810.000000 MiB 00:04:56.864 size: 810.000000 MiB heap id: 0 00:04:56.864 end heaps---------- 00:04:56.864 9 mempools totaling size 595.772034 MiB 00:04:56.864 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.864 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.864 size: 92.545471 MiB name: bdev_io_57866 00:04:56.864 size: 50.003479 MiB name: msgpool_57866 00:04:56.864 size: 36.509338 MiB name: fsdev_io_57866 00:04:56.864 size: 21.763794 MiB name: PDU_Pool 00:04:56.864 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.864 size: 4.133484 MiB name: evtpool_57866 00:04:56.864 size: 0.026123 MiB name: Session_Pool 00:04:56.864 end mempools------- 00:04:56.864 6 memzones totaling size 4.142822 MiB 00:04:56.864 size: 1.000366 MiB name: RG_ring_0_57866 00:04:56.864 size: 1.000366 MiB name: RG_ring_1_57866 00:04:56.864 size: 1.000366 MiB name: RG_ring_4_57866 00:04:56.864 size: 1.000366 MiB name: RG_ring_5_57866 00:04:56.864 size: 0.125366 MiB name: RG_ring_2_57866 00:04:56.864 size: 0.015991 MiB name: RG_ring_3_57866 00:04:56.864 end memzones------- 00:04:56.864 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.864 heap id: 0 total size: 810.000000 MiB number of busy elements: 315 number of free elements: 15 00:04:56.864 list of free elements. 
size: 10.812866 MiB 00:04:56.864 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:56.864 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:56.864 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:56.864 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:56.864 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:56.864 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:56.864 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:56.864 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:56.864 element at address: 0x20001a600000 with size: 0.567322 MiB 00:04:56.864 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:56.864 element at address: 0x200000c00000 with size: 0.487000 MiB 00:04:56.864 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:56.864 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:56.864 element at address: 0x200027a00000 with size: 0.395752 MiB 00:04:56.864 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:56.864 list of standard malloc elements. size: 199.268250 MiB 00:04:56.864 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:56.864 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:56.864 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:56.864 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:56.864 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:56.864 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:56.864 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:56.865 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:56.865 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:56.865 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:56.865 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:56.865 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691480 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691540 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691600 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691780 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691840 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691900 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:04:56.865 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692080 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692140 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692200 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692380 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692440 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692500 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692680 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692740 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692800 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692980 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692e00 with size: 0.000183 MiB 
00:04:56.866 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693040 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693100 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693280 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693340 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693400 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693580 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693640 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693700 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693880 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693940 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694000 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694180 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694240 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694300 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694480 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694540 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694600 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694780 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694840 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694900 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a695080 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a695140 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a695200 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:04:56.866 element at 
address: 0x20001a695380 with size: 0.000183 MiB 00:04:56.866 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a65500 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e4c0 
with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:56.866 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:56.866 list of memzone associated elements. 
size: 599.918884 MiB 00:04:56.866 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:56.867 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:56.867 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:56.867 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:56.867 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:56.867 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57866_0 00:04:56.867 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:56.867 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57866_0 00:04:56.867 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:56.867 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57866_0 00:04:56.867 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:56.867 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:56.867 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:56.867 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:56.867 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:56.867 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57866_0 00:04:56.867 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:56.867 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57866 00:04:56.867 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:56.867 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57866 00:04:56.867 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:56.867 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:56.867 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:56.867 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:56.867 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:56.867 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:56.867 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:56.867 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:56.867 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:56.867 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57866 00:04:56.867 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:56.867 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57866 00:04:56.867 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:56.867 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57866 00:04:56.867 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:56.867 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57866 00:04:56.867 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:56.867 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57866 00:04:56.867 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:56.867 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57866 00:04:56.867 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:56.867 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.867 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:56.867 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.867 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:56.867 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:56.867 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:56.867 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57866 00:04:56.867 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:56.867 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57866 00:04:56.867 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:56.867 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.867 element at address: 0x200027a65680 with size: 0.023743 MiB 00:04:56.867 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.867 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:56.867 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57866 00:04:56.867 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:04:56.867 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.867 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:56.867 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57866 00:04:56.867 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:56.867 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57866 00:04:56.867 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:56.867 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57866 00:04:56.867 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:04:56.867 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.867 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.867 09:54:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57866 00:04:56.867 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57866 ']' 00:04:56.867 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57866 00:04:56.867 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:56.867 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.867 09:54:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57866 00:04:56.867 killing process with pid 57866 00:04:56.867 09:54:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.867 09:54:29 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.867 09:54:29 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57866' 00:04:56.867 09:54:29 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57866 00:04:56.867 09:54:29 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57866 00:04:57.467 ************************************ 00:04:57.467 END TEST dpdk_mem_utility 00:04:57.467 ************************************ 00:04:57.467 00:04:57.467 real 0m1.349s 00:04:57.467 user 0m1.316s 00:04:57.467 sys 0m0.434s 00:04:57.467 09:54:29 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.467 09:54:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.467 09:54:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.467 09:54:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.467 09:54:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.467 09:54:29 -- common/autotest_common.sh@10 -- # set +x 
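For reference, the dpdk_mem_utility flow exercised above can be reproduced by hand against a running spdk_tgt. This is a minimal sketch built only from the commands visible in this log (the harness's rpc_cmd is assumed to wrap scripts/rpc.py), with paths following the vagrant layout used here:

cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt &                      # start the target; it listens on /var/tmp/spdk.sock
./scripts/rpc.py env_dpdk_get_mem_stats     # ask DPDK for a stats dump; per the log it returns {"filename": "/tmp/spdk_mem_dump.txt"}
./scripts/dpdk_mem_info.py                  # summary: heaps, mempools and memzones (the "810.000000 MiB" overview above)
./scripts/dpdk_mem_info.py -m 0             # the detailed heap-0 element and memzone listing above came from this invocation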
00:04:57.467 ************************************ 00:04:57.467 START TEST event 00:04:57.467 ************************************ 00:04:57.467 09:54:29 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.467 * Looking for test storage... 00:04:57.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.467 09:54:29 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.467 09:54:29 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.467 09:54:29 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.724 09:54:29 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.724 09:54:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.724 09:54:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.724 09:54:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.724 09:54:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.724 09:54:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.724 09:54:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.724 09:54:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.724 09:54:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.724 09:54:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.724 09:54:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.724 09:54:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.724 09:54:29 event -- scripts/common.sh@344 -- # case "$op" in 00:04:57.724 09:54:29 event -- scripts/common.sh@345 -- # : 1 00:04:57.724 09:54:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.724 09:54:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.724 09:54:29 event -- scripts/common.sh@365 -- # decimal 1 00:04:57.724 09:54:29 event -- scripts/common.sh@353 -- # local d=1 00:04:57.724 09:54:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.724 09:54:29 event -- scripts/common.sh@355 -- # echo 1 00:04:57.724 09:54:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.724 09:54:29 event -- scripts/common.sh@366 -- # decimal 2 00:04:57.724 09:54:29 event -- scripts/common.sh@353 -- # local d=2 00:04:57.724 09:54:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.724 09:54:29 event -- scripts/common.sh@355 -- # echo 2 00:04:57.724 09:54:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.724 09:54:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.724 09:54:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.724 09:54:29 event -- scripts/common.sh@368 -- # return 0 00:04:57.724 09:54:29 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.724 09:54:29 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.724 --rc genhtml_branch_coverage=1 00:04:57.724 --rc genhtml_function_coverage=1 00:04:57.724 --rc genhtml_legend=1 00:04:57.724 --rc geninfo_all_blocks=1 00:04:57.724 --rc geninfo_unexecuted_blocks=1 00:04:57.724 00:04:57.724 ' 00:04:57.724 09:54:29 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.724 --rc genhtml_branch_coverage=1 00:04:57.724 --rc genhtml_function_coverage=1 00:04:57.724 --rc genhtml_legend=1 00:04:57.724 --rc 
geninfo_all_blocks=1 00:04:57.724 --rc geninfo_unexecuted_blocks=1 00:04:57.724 00:04:57.724 ' 00:04:57.724 09:54:29 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.724 --rc genhtml_branch_coverage=1 00:04:57.724 --rc genhtml_function_coverage=1 00:04:57.724 --rc genhtml_legend=1 00:04:57.724 --rc geninfo_all_blocks=1 00:04:57.724 --rc geninfo_unexecuted_blocks=1 00:04:57.724 00:04:57.724 ' 00:04:57.724 09:54:29 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.724 --rc genhtml_branch_coverage=1 00:04:57.724 --rc genhtml_function_coverage=1 00:04:57.724 --rc genhtml_legend=1 00:04:57.724 --rc geninfo_all_blocks=1 00:04:57.724 --rc geninfo_unexecuted_blocks=1 00:04:57.724 00:04:57.724 ' 00:04:57.724 09:54:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:57.724 09:54:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.724 09:54:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.724 09:54:29 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:57.724 09:54:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.724 09:54:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.724 ************************************ 00:04:57.724 START TEST event_perf 00:04:57.724 ************************************ 00:04:57.724 09:54:29 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.724 Running I/O for 1 seconds...[2024-11-04 09:54:29.680882] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:04:57.724 [2024-11-04 09:54:29.681144] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57949 ] 00:04:57.724 [2024-11-04 09:54:29.836076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.981 [2024-11-04 09:54:29.913173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.981 [2024-11-04 09:54:29.913328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.981 [2024-11-04 09:54:29.913442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.981 [2024-11-04 09:54:29.913446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.914 Running I/O for 1 seconds... 00:04:58.914 lcore 0: 196686 00:04:58.914 lcore 1: 196685 00:04:58.914 lcore 2: 196685 00:04:58.914 lcore 3: 196687 00:04:58.914 done. 
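The per-lcore counts above come from the standalone event_perf benchmark. A minimal sketch of running it directly with the same flags the harness passed (-m 0xF for the four-core mask behind the four reactors reported, -t 1 for the one-second run shown):

cd /home/vagrant/spdk_repo/spdk
./test/event/event_perf/event_perf -m 0xF -t 1   # prints "lcore N: <event count>" per reactor, then "done."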
00:04:58.914 00:04:58.914 real 0m1.311s 00:04:58.914 user 0m4.121s 00:04:58.914 sys 0m0.067s 00:04:58.914 09:54:30 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.914 ************************************ 00:04:58.914 END TEST event_perf 00:04:58.914 ************************************ 00:04:58.914 09:54:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.914 09:54:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:58.914 09:54:31 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:58.914 09:54:31 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.914 09:54:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.914 ************************************ 00:04:58.914 START TEST event_reactor 00:04:58.914 ************************************ 00:04:58.914 09:54:31 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:58.914 [2024-11-04 09:54:31.045718] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:04:58.914 [2024-11-04 09:54:31.045810] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57982 ] 00:04:59.171 [2024-11-04 09:54:31.196200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.171 [2024-11-04 09:54:31.258208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.554 test_start 00:05:00.554 oneshot 00:05:00.554 tick 100 00:05:00.554 tick 100 00:05:00.554 tick 250 00:05:00.554 tick 100 00:05:00.554 tick 100 00:05:00.554 tick 100 00:05:00.554 tick 250 00:05:00.554 tick 500 00:05:00.554 tick 100 00:05:00.554 tick 100 00:05:00.554 tick 250 00:05:00.554 tick 100 00:05:00.554 tick 100 00:05:00.554 test_end 00:05:00.554 00:05:00.554 real 0m1.291s 00:05:00.554 user 0m1.131s 00:05:00.554 sys 0m0.053s 00:05:00.554 09:54:32 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.554 ************************************ 00:05:00.554 09:54:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:00.554 END TEST event_reactor 00:05:00.554 ************************************ 00:05:00.554 09:54:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.554 09:54:32 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:00.554 09:54:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.554 09:54:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.554 ************************************ 00:05:00.554 START TEST event_reactor_perf 00:05:00.554 ************************************ 00:05:00.554 09:54:32 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.554 [2024-11-04 09:54:32.394347] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:00.554 [2024-11-04 09:54:32.394780] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58012 ] 00:05:00.554 [2024-11-04 09:54:32.545184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.554 [2024-11-04 09:54:32.610420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.928 test_start 00:05:01.928 test_end 00:05:01.928 Performance: 369548 events per second 00:05:01.928 00:05:01.928 real 0m1.288s 00:05:01.928 user 0m1.138s 00:05:01.928 sys 0m0.041s 00:05:01.928 09:54:33 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.928 ************************************ 00:05:01.928 END TEST event_reactor_perf 00:05:01.928 ************************************ 00:05:01.928 09:54:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.928 09:54:33 event -- event/event.sh@49 -- # uname -s 00:05:01.928 09:54:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:01.928 09:54:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:01.928 09:54:33 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.928 09:54:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.928 09:54:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.928 ************************************ 00:05:01.928 START TEST event_scheduler 00:05:01.928 ************************************ 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:01.928 * Looking for test storage... 
00:05:01.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.928 09:54:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.928 --rc genhtml_branch_coverage=1 00:05:01.928 --rc genhtml_function_coverage=1 00:05:01.928 --rc genhtml_legend=1 00:05:01.928 --rc geninfo_all_blocks=1 00:05:01.928 --rc geninfo_unexecuted_blocks=1 00:05:01.928 00:05:01.928 ' 00:05:01.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.928 --rc genhtml_branch_coverage=1 00:05:01.928 --rc genhtml_function_coverage=1 00:05:01.928 --rc genhtml_legend=1 00:05:01.928 --rc geninfo_all_blocks=1 00:05:01.928 --rc geninfo_unexecuted_blocks=1 00:05:01.928 00:05:01.928 ' 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.928 --rc genhtml_branch_coverage=1 00:05:01.928 --rc genhtml_function_coverage=1 00:05:01.928 --rc genhtml_legend=1 00:05:01.928 --rc geninfo_all_blocks=1 00:05:01.928 --rc geninfo_unexecuted_blocks=1 00:05:01.928 00:05:01.928 ' 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.928 --rc genhtml_branch_coverage=1 00:05:01.928 --rc genhtml_function_coverage=1 00:05:01.928 --rc genhtml_legend=1 00:05:01.928 --rc geninfo_all_blocks=1 00:05:01.928 --rc geninfo_unexecuted_blocks=1 00:05:01.928 00:05:01.928 ' 00:05:01.928 09:54:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:01.928 09:54:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58087 00:05:01.928 09:54:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:01.928 09:54:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.928 09:54:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58087 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58087 ']' 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:01.928 09:54:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.928 [2024-11-04 09:54:33.954729] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
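The scheduler app launched here is parked on --wait-for-rpc, so the rest of this test is driven over the RPC socket. A hedged sketch of the same sequence issued by hand, assuming the harness's rpc_cmd maps to scripts/rpc.py and that rpc.py can import the test's scheduler_plugin (e.g. from test/event/scheduler):

cd /home/vagrant/spdk_repo/spdk
./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
./scripts/rpc.py framework_set_scheduler dynamic      # select the dynamic scheduler before init; it still comes up even when the dpdk governor cannot start (see the POWER errors that follow)
./scripts/rpc.py framework_start_init                 # complete subsystem initialization
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50   # 11 is the thread id the create call returned in this run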
00:05:01.929 [2024-11-04 09:54:33.955106] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58087 ] 00:05:02.187 [2024-11-04 09:54:34.105631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.187 [2024-11-04 09:54:34.183639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.187 [2024-11-04 09:54:34.183789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.187 [2024-11-04 09:54:34.183903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.187 [2024-11-04 09:54:34.183906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:02.187 09:54:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.187 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.187 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.187 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.187 POWER: Cannot set governor of lcore 0 to performance 00:05:02.187 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.187 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.187 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:02.187 POWER: Cannot set governor of lcore 0 to userspace 00:05:02.187 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:02.187 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:02.187 POWER: Unable to set Power Management Environment for lcore 0 00:05:02.187 [2024-11-04 09:54:34.242002] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:02.187 [2024-11-04 09:54:34.242269] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:02.187 [2024-11-04 09:54:34.242760] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:02.187 [2024-11-04 09:54:34.242786] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:02.187 [2024-11-04 09:54:34.242797] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:02.187 [2024-11-04 09:54:34.242806] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.187 09:54:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.187 [2024-11-04 09:54:34.307784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:02.187 [2024-11-04 09:54:34.349113] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.187 09:54:34 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.187 09:54:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 ************************************ 00:05:02.445 START TEST scheduler_create_thread 00:05:02.445 ************************************ 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 2 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 3 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 4 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 5 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 6 00:05:02.445 
09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 7 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 8 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 9 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 10 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.445 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:02.446 09:54:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:02.446 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.446 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.446 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.446 09:54:34 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:02.446 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:02.446 09:54:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.818 09:54:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.818 09:54:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:03.818 09:54:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:03.818 09:54:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.818 09:54:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.191 ************************************ 00:05:05.191 END TEST scheduler_create_thread 00:05:05.191 ************************************ 00:05:05.191 09:54:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.191 00:05:05.191 real 0m2.613s 00:05:05.191 user 0m0.012s 00:05:05.191 sys 0m0.008s 00:05:05.191 09:54:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.191 09:54:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.191 09:54:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:05.191 09:54:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58087 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58087 ']' 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58087 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58087 00:05:05.191 killing process with pid 58087 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58087' 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58087 00:05:05.191 09:54:37 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58087 00:05:05.449 [2024-11-04 09:54:37.454086] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
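[editor's note] For readability, the scheduler_create_thread body traced above reduces to these plugin RPCs (a condensed sketch; rpc_cmd and the scheduler_plugin module are the test helpers shown in the trace, the 0x1-0x8 masks pin one thread per core, and thread IDs 11 and 12 are simply the values this run returned; the trace creates all active_pinned threads before the idle_pinned ones, the loop below just compresses the eight calls):

  # four busy threads and four idle threads, one of each pinned per core
  for mask in 0x1 0x2 0x4 0x8; do
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done
  # unpinned threads with partial load, one of which gets its activity raised afterwards
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  # create one more thread only to delete it again
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"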
00:05:05.708 00:05:05.708 real 0m3.948s 00:05:05.708 user 0m5.751s 00:05:05.708 sys 0m0.364s 00:05:05.708 09:54:37 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.708 ************************************ 00:05:05.708 END TEST event_scheduler 00:05:05.708 ************************************ 00:05:05.708 09:54:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.708 09:54:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:05.708 09:54:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:05.708 09:54:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.708 09:54:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.708 09:54:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.708 ************************************ 00:05:05.708 START TEST app_repeat 00:05:05.708 ************************************ 00:05:05.708 09:54:37 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:05.708 Process app_repeat pid: 58172 00:05:05.708 spdk_app_start Round 0 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58172 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58172' 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:05.708 09:54:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58172 /var/tmp/spdk-nbd.sock 00:05:05.708 09:54:37 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58172 ']' 00:05:05.708 09:54:37 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.708 09:54:37 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.708 09:54:37 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.708 09:54:37 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.708 09:54:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.708 [2024-11-04 09:54:37.776085] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
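[editor's note] The app_repeat start just traced is, in essence, the following (a sketch of the event.sh commands visible above; the backgrounding of the app and the $! capture are assumed rather than shown verbatim, and -t 4 is reproduced as logged):

  rpc_server=/var/tmp/spdk-nbd.sock
  modprobe nbd
  # app_repeat on cores 0-1 (-m 0x3), serving RPCs on the nbd socket
  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$repeat_pid" "$rpc_server"

Each round (spdk_app_start Round 0/1/2 in the trace) then goes through the same bdev/NBD setup, write/verify and teardown sketched further below.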
00:05:05.708 [2024-11-04 09:54:37.776232] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58172 ] 00:05:05.966 [2024-11-04 09:54:37.940871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.966 [2024-11-04 09:54:38.024639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.967 [2024-11-04 09:54:38.024664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.967 [2024-11-04 09:54:38.082985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:06.224 09:54:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.224 09:54:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:06.224 09:54:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.482 Malloc0 00:05:06.482 09:54:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.741 Malloc1 00:05:06.741 09:54:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.741 09:54:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.018 /dev/nbd0 00:05:07.018 09:54:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.018 09:54:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.018 09:54:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:07.018 09:54:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:07.018 09:54:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:07.018 09:54:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:07.018 09:54:39 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:07.018 09:54:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:07.018 09:54:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:07.018 09:54:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:07.019 09:54:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.019 1+0 records in 00:05:07.019 1+0 records out 00:05:07.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043383 s, 9.4 MB/s 00:05:07.019 09:54:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.019 09:54:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:07.019 09:54:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.019 09:54:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:07.019 09:54:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:07.019 09:54:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.019 09:54:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.019 09:54:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.276 /dev/nbd1 00:05:07.276 09:54:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.276 09:54:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:07.276 09:54:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.276 1+0 records in 00:05:07.276 1+0 records out 00:05:07.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261111 s, 15.7 MB/s 00:05:07.534 09:54:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.534 09:54:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:07.534 09:54:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.534 09:54:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:07.534 09:54:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:07.534 09:54:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.534 09:54:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.534 09:54:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:07.534 09:54:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.534 09:54:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.792 { 00:05:07.792 "nbd_device": "/dev/nbd0", 00:05:07.792 "bdev_name": "Malloc0" 00:05:07.792 }, 00:05:07.792 { 00:05:07.792 "nbd_device": "/dev/nbd1", 00:05:07.792 "bdev_name": "Malloc1" 00:05:07.792 } 00:05:07.792 ]' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.792 { 00:05:07.792 "nbd_device": "/dev/nbd0", 00:05:07.792 "bdev_name": "Malloc0" 00:05:07.792 }, 00:05:07.792 { 00:05:07.792 "nbd_device": "/dev/nbd1", 00:05:07.792 "bdev_name": "Malloc1" 00:05:07.792 } 00:05:07.792 ]' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.792 /dev/nbd1' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.792 /dev/nbd1' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.792 256+0 records in 00:05:07.792 256+0 records out 00:05:07.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109074 s, 96.1 MB/s 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.792 256+0 records in 00:05:07.792 256+0 records out 00:05:07.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277286 s, 37.8 MB/s 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.792 256+0 records in 00:05:07.792 256+0 records out 00:05:07.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240827 s, 43.5 MB/s 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.792 09:54:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.793 09:54:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.793 09:54:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.793 09:54:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.793 09:54:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.793 09:54:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.050 09:54:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.308 09:54:40 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.308 09:54:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.873 09:54:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.873 09:54:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.131 09:54:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.389 [2024-11-04 09:54:41.319304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.389 [2024-11-04 09:54:41.373423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.389 [2024-11-04 09:54:41.373436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.389 [2024-11-04 09:54:41.427685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.389 [2024-11-04 09:54:41.427772] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.389 [2024-11-04 09:54:41.427787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.670 09:54:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.670 spdk_app_start Round 1 00:05:12.670 09:54:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:12.670 09:54:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58172 /var/tmp/spdk-nbd.sock 00:05:12.670 09:54:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58172 ']' 00:05:12.670 09:54:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.670 09:54:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.670 09:54:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
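[editor's note] Each round builds the same NBD topology; condensed from the nbd_rpc_data_verify / nbd_start_disks trace above (the rpc.py arguments are exactly as logged; in rpc.py bdev_malloc_create the two numbers are the size in MB and the block size in bytes):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # two 64 MB malloc bdevs with a 4096-byte block size, each exported as an NBD device
  "$rpc" -s "$sock" bdev_malloc_create 64 4096        # -> Malloc0
  "$rpc" -s "$sock" bdev_malloc_create 64 4096        # -> Malloc1
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
  # both devices must be listed before any I/O is attempted
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd   # expect 2

The trace additionally polls /proc/partitions and reads one 4096-byte block back from each device (waitfornbd) before proceeding.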
00:05:12.670 09:54:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.670 09:54:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.670 09:54:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.670 09:54:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:12.670 09:54:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.670 Malloc0 00:05:12.670 09:54:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.928 Malloc1 00:05:12.928 09:54:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.187 09:54:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.449 /dev/nbd0 00:05:13.449 09:54:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.449 09:54:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.449 1+0 records in 00:05:13.449 1+0 records out 
00:05:13.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280442 s, 14.6 MB/s 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:13.449 09:54:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:13.449 09:54:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.449 09:54:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.449 09:54:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.707 /dev/nbd1 00:05:13.707 09:54:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.707 09:54:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.707 1+0 records in 00:05:13.707 1+0 records out 00:05:13.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375261 s, 10.9 MB/s 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:13.707 09:54:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:13.707 09:54:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.707 09:54:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.707 09:54:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.707 09:54:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.707 09:54:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.965 { 00:05:13.965 "nbd_device": "/dev/nbd0", 00:05:13.965 "bdev_name": "Malloc0" 00:05:13.965 }, 00:05:13.965 { 00:05:13.965 "nbd_device": "/dev/nbd1", 00:05:13.965 "bdev_name": "Malloc1" 00:05:13.965 } 
00:05:13.965 ]' 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.965 { 00:05:13.965 "nbd_device": "/dev/nbd0", 00:05:13.965 "bdev_name": "Malloc0" 00:05:13.965 }, 00:05:13.965 { 00:05:13.965 "nbd_device": "/dev/nbd1", 00:05:13.965 "bdev_name": "Malloc1" 00:05:13.965 } 00:05:13.965 ]' 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.965 /dev/nbd1' 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.965 /dev/nbd1' 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.965 256+0 records in 00:05:13.965 256+0 records out 00:05:13.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00793278 s, 132 MB/s 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.965 09:54:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.223 256+0 records in 00:05:14.223 256+0 records out 00:05:14.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199255 s, 52.6 MB/s 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.223 256+0 records in 00:05:14.223 256+0 records out 00:05:14.223 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259271 s, 40.4 MB/s 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.223 09:54:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.223 09:54:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.480 09:54:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.737 09:54:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.737 09:54:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.737 09:54:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.737 09:54:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.737 09:54:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.737 09:54:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.738 09:54:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.738 09:54:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.738 09:54:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.738 09:54:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.738 09:54:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.995 09:54:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.995 09:54:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.561 09:54:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.561 [2024-11-04 09:54:47.636613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.561 [2024-11-04 09:54:47.696514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.561 [2024-11-04 09:54:47.696525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.819 [2024-11-04 09:54:47.750844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.819 [2024-11-04 09:54:47.750952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.819 [2024-11-04 09:54:47.750964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.348 09:54:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.348 spdk_app_start Round 2 00:05:18.348 09:54:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:18.348 09:54:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58172 /var/tmp/spdk-nbd.sock 00:05:18.348 09:54:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58172 ']' 00:05:18.348 09:54:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.348 09:54:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.348 09:54:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
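[editor's note] The write/verify pass each round runs against those two devices, condensed from the nbd_dd_data_verify trace above (paths, sizes and the cmp invocation are as logged; the explicit for-loops stand in for the "for i in ${nbd_list[@]}" loops of nbd_common.sh):

  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  # 1 MiB of random data, pushed through each NBD device with O_DIRECT, then compared back
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp_file" "$nbd"
  done
  rm "$tmp_file"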
00:05:18.348 09:54:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.348 09:54:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.914 09:54:50 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.914 09:54:50 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:18.914 09:54:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.172 Malloc0 00:05:19.172 09:54:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.430 Malloc1 00:05:19.430 09:54:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.430 09:54:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.688 /dev/nbd0 00:05:19.688 09:54:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.688 09:54:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.688 1+0 records in 00:05:19.688 1+0 records out 
00:05:19.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268661 s, 15.2 MB/s 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:19.688 09:54:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:19.688 09:54:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.688 09:54:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.688 09:54:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.946 /dev/nbd1 00:05:19.946 09:54:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.946 09:54:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.946 1+0 records in 00:05:19.946 1+0 records out 00:05:19.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364786 s, 11.2 MB/s 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:19.946 09:54:52 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:19.946 09:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.946 09:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.946 09:54:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.946 09:54:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.946 09:54:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.511 { 00:05:20.511 "nbd_device": "/dev/nbd0", 00:05:20.511 "bdev_name": "Malloc0" 00:05:20.511 }, 00:05:20.511 { 00:05:20.511 "nbd_device": "/dev/nbd1", 00:05:20.511 "bdev_name": "Malloc1" 00:05:20.511 } 
00:05:20.511 ]' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.511 { 00:05:20.511 "nbd_device": "/dev/nbd0", 00:05:20.511 "bdev_name": "Malloc0" 00:05:20.511 }, 00:05:20.511 { 00:05:20.511 "nbd_device": "/dev/nbd1", 00:05:20.511 "bdev_name": "Malloc1" 00:05:20.511 } 00:05:20.511 ]' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.511 /dev/nbd1' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.511 /dev/nbd1' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.511 256+0 records in 00:05:20.511 256+0 records out 00:05:20.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00893728 s, 117 MB/s 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.511 256+0 records in 00:05:20.511 256+0 records out 00:05:20.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203508 s, 51.5 MB/s 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.511 256+0 records in 00:05:20.511 256+0 records out 00:05:20.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233461 s, 44.9 MB/s 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.511 09:54:52 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.511 09:54:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.770 09:54:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.028 09:54:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.286 09:54:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.286 09:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.544 09:54:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.544 09:54:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.802 09:54:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:22.061 [2024-11-04 09:54:53.999105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.061 [2024-11-04 09:54:54.049995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.061 [2024-11-04 09:54:54.050021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.061 [2024-11-04 09:54:54.104338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.061 [2024-11-04 09:54:54.104465] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.061 [2024-11-04 09:54:54.104478] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.347 09:54:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58172 /var/tmp/spdk-nbd.sock 00:05:25.347 09:54:56 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58172 ']' 00:05:25.347 09:54:56 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.347 09:54:56 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.347 09:54:56 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
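The write/verify pass traced earlier in this run (dd from /dev/urandom into a temp file, dd onto each NBD device with oflag=direct, then cmp) is the whole of the nbd_dd_data_verify step. A minimal standalone sketch of that flow, using only commands visible in the trace; the temp path is illustrative and the devices are assumed to already be attached via nbd_start_disk:

    # Sketch of the NBD write/verify pattern shown above (not the test script itself).
    tmp_file=/tmp/nbdrandtest            # illustrative path
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: build a 1 MiB reference file and copy it onto each device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device against the file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev"
    done
    rm -f "$tmp_file"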
00:05:25.347 09:54:56 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.347 09:54:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:25.347 09:54:57 event.app_repeat -- event/event.sh@39 -- # killprocess 58172 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58172 ']' 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58172 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58172 00:05:25.347 killing process with pid 58172 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58172' 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58172 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58172 00:05:25.347 spdk_app_start is called in Round 0. 00:05:25.347 Shutdown signal received, stop current app iteration 00:05:25.347 Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 reinitialization... 00:05:25.347 spdk_app_start is called in Round 1. 00:05:25.347 Shutdown signal received, stop current app iteration 00:05:25.347 Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 reinitialization... 00:05:25.347 spdk_app_start is called in Round 2. 00:05:25.347 Shutdown signal received, stop current app iteration 00:05:25.347 Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 reinitialization... 00:05:25.347 spdk_app_start is called in Round 3. 00:05:25.347 Shutdown signal received, stop current app iteration 00:05:25.347 ************************************ 00:05:25.347 END TEST app_repeat 00:05:25.347 ************************************ 00:05:25.347 09:54:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:25.347 09:54:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:25.347 00:05:25.347 real 0m19.601s 00:05:25.347 user 0m44.937s 00:05:25.347 sys 0m2.997s 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.347 09:54:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.347 09:54:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:25.347 09:54:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.347 09:54:57 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.347 09:54:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.347 09:54:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.347 ************************************ 00:05:25.347 START TEST cpu_locks 00:05:25.347 ************************************ 00:05:25.347 09:54:57 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.347 * Looking for test storage... 
00:05:25.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.347 09:54:57 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.347 09:54:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.347 09:54:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.605 09:54:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.605 09:54:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.605 09:54:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.605 09:54:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.606 09:54:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:25.606 09:54:57 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.606 09:54:57 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.606 --rc genhtml_branch_coverage=1 00:05:25.606 --rc genhtml_function_coverage=1 00:05:25.606 --rc genhtml_legend=1 00:05:25.606 --rc geninfo_all_blocks=1 00:05:25.606 --rc geninfo_unexecuted_blocks=1 00:05:25.606 00:05:25.606 ' 00:05:25.606 09:54:57 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.606 --rc genhtml_branch_coverage=1 00:05:25.606 --rc genhtml_function_coverage=1 
00:05:25.606 --rc genhtml_legend=1 00:05:25.606 --rc geninfo_all_blocks=1 00:05:25.606 --rc geninfo_unexecuted_blocks=1 00:05:25.606 00:05:25.606 ' 00:05:25.606 09:54:57 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.606 --rc genhtml_branch_coverage=1 00:05:25.606 --rc genhtml_function_coverage=1 00:05:25.606 --rc genhtml_legend=1 00:05:25.606 --rc geninfo_all_blocks=1 00:05:25.606 --rc geninfo_unexecuted_blocks=1 00:05:25.606 00:05:25.606 ' 00:05:25.606 09:54:57 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.606 --rc genhtml_branch_coverage=1 00:05:25.606 --rc genhtml_function_coverage=1 00:05:25.606 --rc genhtml_legend=1 00:05:25.606 --rc geninfo_all_blocks=1 00:05:25.606 --rc geninfo_unexecuted_blocks=1 00:05:25.606 00:05:25.606 ' 00:05:25.606 09:54:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:25.606 09:54:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:25.606 09:54:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:25.606 09:54:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:25.606 09:54:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.606 09:54:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.606 09:54:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.606 ************************************ 00:05:25.606 START TEST default_locks 00:05:25.606 ************************************ 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58623 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58623 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58623 ']' 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.606 09:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.606 [2024-11-04 09:54:57.628389] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:25.606 [2024-11-04 09:54:57.628491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58623 ] 00:05:25.606 [2024-11-04 09:54:57.767715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.864 [2024-11-04 09:54:57.829552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.864 [2024-11-04 09:54:57.904166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.798 09:54:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.798 09:54:58 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:26.798 09:54:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58623 00:05:26.798 09:54:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.798 09:54:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58623 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58623 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58623 ']' 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58623 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58623 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.057 killing process with pid 58623 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58623' 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58623 00:05:27.057 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58623 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58623 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58623 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58623 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58623 ']' 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.315 
09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58623) - No such process 00:05:27.315 ERROR: process (pid: 58623) is no longer running 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.315 00:05:27.315 real 0m1.900s 00:05:27.315 user 0m2.092s 00:05:27.315 sys 0m0.547s 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:27.315 09:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.315 ************************************ 00:05:27.315 END TEST default_locks 00:05:27.315 ************************************ 00:05:27.573 09:54:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:27.573 09:54:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:27.573 09:54:59 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:27.573 09:54:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.573 ************************************ 00:05:27.573 START TEST default_locks_via_rpc 00:05:27.573 ************************************ 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58670 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58670 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58670 ']' 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.573 09:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.573 [2024-11-04 09:54:59.578084] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:27.573 [2024-11-04 09:54:59.578171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58670 ] 00:05:27.573 [2024-11-04 09:54:59.719735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.831 [2024-11-04 09:54:59.780166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.831 [2024-11-04 09:54:59.850156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.089 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58670 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58670 00:05:28.090 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58670 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58670 ']' 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58670 00:05:28.397 09:55:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58670 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:28.397 killing process with pid 58670 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58670' 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58670 00:05:28.397 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58670 00:05:28.962 00:05:28.962 real 0m1.380s 00:05:28.962 user 0m1.335s 00:05:28.962 sys 0m0.532s 00:05:28.962 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.962 ************************************ 00:05:28.962 END TEST default_locks_via_rpc 00:05:28.962 ************************************ 00:05:28.962 09:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.962 09:55:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:28.962 09:55:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.962 09:55:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.962 09:55:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.962 ************************************ 00:05:28.962 START TEST non_locking_app_on_locked_coremask 00:05:28.962 ************************************ 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58713 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58713 /var/tmp/spdk.sock 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58713 ']' 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
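The default_locks and default_locks_via_rpc runs above both decide pass/fail the same way: list the file locks held by the target's PID and look for the spdk_cpu_lock entries, and (in the RPC variant) drop and re-take them through the framework_disable/enable_cpumask_locks calls. A hedged sketch of that check, built only from the commands visible in the trace; the PID is illustrative and the rpc_cmd wrapper is shown as a direct rpc.py call:

    # Sketch: does a running spdk_tgt hold its per-core lock files?
    pid=58670                                        # illustrative PID from the trace
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds spdk_cpu_lock files"
    else
        echo "pid $pid holds no core locks"
    fi

    # RPC variant (the trace drives these through the rpc_cmd wrapper):
    "$rpc" framework_disable_cpumask_locks
    "$rpc" framework_enable_cpumask_locks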
00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.962 09:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.962 [2024-11-04 09:55:01.023912] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:28.962 [2024-11-04 09:55:01.024043] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58713 ] 00:05:29.221 [2024-11-04 09:55:01.165990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.221 [2024-11-04 09:55:01.230968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.221 [2024-11-04 09:55:01.299313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58729 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58729 /var/tmp/spdk2.sock 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58729 ']' 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.155 09:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.155 [2024-11-04 09:55:02.097486] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:30.155 [2024-11-04 09:55:02.097630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58729 ] 00:05:30.155 [2024-11-04 09:55:02.259688] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
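The notice just above is the point of this test: the second target (pid 58729 in the trace) comes up on the already-claimed core 0 only because it was started with --disable-cpumask-locks and given its own RPC socket. A condensed sketch of the two launch lines, assuming the binary path from the trace:

    # Sketch: two targets sharing core 0; the second opts out of core locking.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                 # claims the core 0 lock
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # coexists without locking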
00:05:30.155 [2024-11-04 09:55:02.259755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.414 [2024-11-04 09:55:02.379472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.414 [2024-11-04 09:55:02.534652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.980 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.980 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:30.980 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58713 00:05:30.980 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58713 00:05:30.980 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58713 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58713 ']' 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58713 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58713 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.918 killing process with pid 58713 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58713' 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58713 00:05:31.918 09:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58713 00:05:32.484 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58729 00:05:32.484 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58729 ']' 00:05:32.484 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58729 00:05:32.484 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:32.484 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:32.484 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58729 00:05:32.484 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:32.484 killing process with pid 58729 00:05:32.484 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:32.485 09:55:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58729' 00:05:32.743 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58729 00:05:32.743 09:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58729 00:05:33.001 00:05:33.001 real 0m4.070s 00:05:33.001 user 0m4.565s 00:05:33.001 sys 0m1.121s 00:05:33.001 09:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.001 09:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.001 ************************************ 00:05:33.001 END TEST non_locking_app_on_locked_coremask 00:05:33.001 ************************************ 00:05:33.001 09:55:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:33.001 09:55:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.001 09:55:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.001 09:55:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.001 ************************************ 00:05:33.001 START TEST locking_app_on_unlocked_coremask 00:05:33.001 ************************************ 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58796 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58796 /var/tmp/spdk.sock 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58796 ']' 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.001 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.001 [2024-11-04 09:55:05.150229] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:33.001 [2024-11-04 09:55:05.150376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58796 ] 00:05:33.259 [2024-11-04 09:55:05.298904] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:33.260 [2024-11-04 09:55:05.298961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.260 [2024-11-04 09:55:05.359338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.517 [2024-11-04 09:55:05.432296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.517 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:33.517 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:33.517 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58805 00:05:33.517 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:33.517 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58805 /var/tmp/spdk2.sock 00:05:33.517 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58805 ']' 00:05:33.517 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.517 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.518 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.518 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.518 09:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.775 [2024-11-04 09:55:05.694866] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:33.775 [2024-11-04 09:55:05.694975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58805 ] 00:05:33.775 [2024-11-04 09:55:05.854827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.033 [2024-11-04 09:55:05.982625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.033 [2024-11-04 09:55:06.124000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.598 09:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.598 09:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:34.598 09:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58805 00:05:34.598 09:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58805 00:05:34.598 09:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58796 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58796 ']' 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58796 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58796 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:35.593 killing process with pid 58796 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58796' 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58796 00:05:35.593 09:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58796 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58805 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58805 ']' 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58805 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58805 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:36.158 killing process with pid 58805 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58805' 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58805 00:05:36.158 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58805 00:05:36.724 00:05:36.724 real 0m3.526s 00:05:36.724 user 0m3.857s 00:05:36.724 sys 0m1.041s 00:05:36.724 ************************************ 00:05:36.724 END TEST locking_app_on_unlocked_coremask 00:05:36.724 ************************************ 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.724 09:55:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:36.724 09:55:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.724 09:55:08 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.724 09:55:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.724 ************************************ 00:05:36.724 START TEST locking_app_on_locked_coremask 00:05:36.724 ************************************ 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58872 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58872 /var/tmp/spdk.sock 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58872 ']' 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.724 09:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.724 [2024-11-04 09:55:08.724505] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:36.724 [2024-11-04 09:55:08.724639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58872 ] 00:05:36.724 [2024-11-04 09:55:08.873118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.982 [2024-11-04 09:55:08.933954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.982 [2024-11-04 09:55:09.005020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58888 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58888 /var/tmp/spdk2.sock 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58888 /var/tmp/spdk2.sock 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58888 /var/tmp/spdk2.sock 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58888 ']' 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.915 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:37.916 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.916 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:37.916 09:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.916 [2024-11-04 09:55:09.812505] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:37.916 [2024-11-04 09:55:09.812646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58888 ] 00:05:37.916 [2024-11-04 09:55:09.978050] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58872 has claimed it. 00:05:37.916 [2024-11-04 09:55:09.978134] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.481 ERROR: process (pid: 58888) is no longer running 00:05:38.482 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58888) - No such process 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58872 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58872 00:05:38.482 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.048 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58872 00:05:39.048 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58872 ']' 00:05:39.048 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58872 00:05:39.048 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:39.048 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:39.048 09:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58872 00:05:39.048 09:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:39.048 killing process with pid 58872 00:05:39.048 09:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:39.048 09:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58872' 00:05:39.048 09:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58872 00:05:39.048 09:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58872 00:05:39.305 00:05:39.305 real 0m2.734s 00:05:39.305 user 0m3.253s 00:05:39.305 sys 0m0.636s 00:05:39.305 09:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.305 09:55:11 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:39.305 ************************************ 00:05:39.305 END TEST locking_app_on_locked_coremask 00:05:39.305 ************************************ 00:05:39.305 09:55:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:39.305 09:55:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.305 09:55:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.305 09:55:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.305 ************************************ 00:05:39.305 START TEST locking_overlapped_coremask 00:05:39.305 ************************************ 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58933 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58933 /var/tmp/spdk.sock 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58933 ']' 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:39.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:39.305 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.563 [2024-11-04 09:55:11.498705] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
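The locking_app_on_locked_coremask run that just ended is the negative case: with pid 58872 still holding core 0, the second plain -m 0x1 launch aborts with 'Cannot create lock on core 0', and the NOT wrapper counts that failure as a pass. A rough sketch of the same conflict outside the test harness; the sleep is an illustrative stand-in for waitforlisten:

    # Sketch: the second launch is expected to fail while the first owns core 0.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &
    first_pid=$!
    sleep 1                                   # illustrative startup wait

    if "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance started despite the core lock"
    else
        echo "expected failure: core 0 already claimed by pid $first_pid"
    fi
    kill "$first_pid"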
00:05:39.563 [2024-11-04 09:55:11.498805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58933 ] 00:05:39.563 [2024-11-04 09:55:11.640667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.563 [2024-11-04 09:55:11.698103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.563 [2024-11-04 09:55:11.698230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.563 [2024-11-04 09:55:11.698235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.821 [2024-11-04 09:55:11.765871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58944 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58944 /var/tmp/spdk2.sock 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58944 /var/tmp/spdk2.sock 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58944 /var/tmp/spdk2.sock 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58944 ']' 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:39.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:39.821 09:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.080 [2024-11-04 09:55:12.022902] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
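The two spdk_tgt instances in this test are started with deliberately overlapping core masks: the first with -m 0x7 and the second with -m 0x1c (both command lines appear above), so both claim core 2 and the second is expected to hit the lock error that follows. A small bash sketch of how a hex coremask maps to core numbers; the helper name and the loop bound of 16 are illustrative and not part of the test scripts:

    # decode which CPU cores a hex coremask selects
    decode_mask() {
        local mask=$(( $1 )) core
        for (( core = 0; core < 16; core++ )); do
            (( (mask >> core) & 1 )) && printf '%d ' "$core"
        done
        echo
    }
    decode_mask 0x7    # -> 0 1 2    (first target)
    decode_mask 0x1c   # -> 2 3 4    (second target; core 2 overlaps)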
00:05:40.080 [2024-11-04 09:55:12.023489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58944 ] 00:05:40.080 [2024-11-04 09:55:12.190481] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58933 has claimed it. 00:05:40.080 [2024-11-04 09:55:12.190548] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.716 ERROR: process (pid: 58944) is no longer running 00:05:40.716 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58944) - No such process 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58933 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58933 ']' 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58933 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58933 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:40.716 killing process with pid 58933 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58933' 00:05:40.716 09:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58933 00:05:40.716 09:55:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58933 00:05:41.281 00:05:41.281 real 0m1.712s 00:05:41.281 user 0m4.665s 00:05:41.281 sys 0m0.394s 00:05:41.281 09:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.281 09:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.281 ************************************ 00:05:41.282 END TEST locking_overlapped_coremask 00:05:41.282 ************************************ 00:05:41.282 09:55:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:41.282 09:55:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:41.282 09:55:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.282 09:55:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.282 ************************************ 00:05:41.282 START TEST locking_overlapped_coremask_via_rpc 00:05:41.282 ************************************ 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58990 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58990 /var/tmp/spdk.sock 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:41.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58990 ']' 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:41.282 09:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.282 [2024-11-04 09:55:13.270229] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:41.282 [2024-11-04 09:55:13.270338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58990 ] 00:05:41.282 [2024-11-04 09:55:13.418154] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
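The "CPU core locks deactivated" notice above appears because this target was launched with --disable-cpumask-locks; without that flag spdk_tgt claims every core in its mask through a per-core lock file under /var/tmp, which is exactly what the previous test tripped over. A rough way to inspect those locks by hand, mirroring the check_remaining_locks and lslocks calls seen in this log (the pgrep line is an assumption for picking a pid and presumes a single running target):

    # per-core lock files, e.g. spdk_cpu_lock_000..002 for a target started with -m 0x7
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null
    # confirm which process is holding them
    pid=$(pgrep -f spdk_tgt | head -n1)
    lslocks -p "$pid" | grep spdk_cpu_lock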
00:05:41.282 [2024-11-04 09:55:13.418215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.540 [2024-11-04 09:55:13.482206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.540 [2024-11-04 09:55:13.482314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.540 [2024-11-04 09:55:13.482322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.540 [2024-11-04 09:55:13.551394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59008 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59008 /var/tmp/spdk2.sock 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59008 ']' 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:42.106 09:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 [2024-11-04 09:55:14.315407] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:42.363 [2024-11-04 09:55:14.315944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59008 ] 00:05:42.363 [2024-11-04 09:55:14.475887] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.363 [2024-11-04 09:55:14.475956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.621 [2024-11-04 09:55:14.595511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.621 [2024-11-04 09:55:14.598738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.621 [2024-11-04 09:55:14.598738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:42.621 [2024-11-04 09:55:14.743243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.186 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.186 [2024-11-04 09:55:15.352773] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58990 has claimed it. 00:05:43.444 request: 00:05:43.444 { 00:05:43.444 "method": "framework_enable_cpumask_locks", 00:05:43.444 "req_id": 1 00:05:43.444 } 00:05:43.444 Got JSON-RPC error response 00:05:43.444 response: 00:05:43.444 { 00:05:43.444 "code": -32603, 00:05:43.444 "message": "Failed to claim CPU core: 2" 00:05:43.444 } 00:05:43.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
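The JSON-RPC exchange above is what the test drives through rpc_cmd: cpumask locking is first enabled over RPC on the primary target, then the same method is invoked against the second target's socket and is expected to fail with -32603 because core 2 is already locked. Run by hand from the SPDK repo root the two calls would look roughly like this (socket paths taken from the log):

    # enable core locks on the first target (default socket /var/tmp/spdk.sock)
    ./scripts/rpc.py framework_enable_cpumask_locks
    # the same request against the second target returns -32603 "Failed to claim CPU core: 2"
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks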
00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58990 /var/tmp/spdk.sock 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58990 ']' 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.444 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59008 /var/tmp/spdk2.sock 00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59008 ']' 00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.722 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.998 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:43.998 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:43.998 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:43.998 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.998 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.999 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.999 00:05:43.999 real 0m2.704s 00:05:43.999 user 0m1.397s 00:05:43.999 sys 0m0.232s 00:05:43.999 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.999 09:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.999 ************************************ 00:05:43.999 END TEST locking_overlapped_coremask_via_rpc 00:05:43.999 ************************************ 00:05:43.999 09:55:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:43.999 09:55:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58990 ]] 00:05:43.999 09:55:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58990 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58990 ']' 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58990 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58990 00:05:43.999 killing process with pid 58990 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58990' 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58990 00:05:43.999 09:55:15 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58990 00:05:44.257 09:55:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59008 ]] 00:05:44.257 09:55:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59008 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59008 ']' 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59008 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.257 
09:55:16 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59008 00:05:44.257 killing process with pid 59008 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59008' 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59008 00:05:44.257 09:55:16 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59008 00:05:44.822 09:55:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:44.822 09:55:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:44.822 09:55:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58990 ]] 00:05:44.822 09:55:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58990 00:05:44.822 09:55:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58990 ']' 00:05:44.822 09:55:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58990 00:05:44.822 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58990) - No such process 00:05:44.822 Process with pid 58990 is not found 00:05:44.822 09:55:16 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58990 is not found' 00:05:44.822 09:55:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59008 ]] 00:05:44.822 09:55:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59008 00:05:44.822 09:55:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59008 ']' 00:05:44.822 09:55:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59008 00:05:44.822 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59008) - No such process 00:05:44.822 Process with pid 59008 is not found 00:05:44.822 09:55:16 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59008 is not found' 00:05:44.822 09:55:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:44.822 00:05:44.822 real 0m19.393s 00:05:44.822 user 0m34.138s 00:05:44.822 sys 0m5.396s 00:05:44.822 09:55:16 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.822 09:55:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.822 ************************************ 00:05:44.822 END TEST cpu_locks 00:05:44.822 ************************************ 00:05:44.822 00:05:44.822 real 0m47.369s 00:05:44.822 user 1m31.445s 00:05:44.822 sys 0m9.208s 00:05:44.822 09:55:16 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.822 09:55:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.822 ************************************ 00:05:44.822 END TEST event 00:05:44.822 ************************************ 00:05:44.822 09:55:16 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:44.822 09:55:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.822 09:55:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.822 09:55:16 -- common/autotest_common.sh@10 -- # set +x 00:05:44.822 ************************************ 00:05:44.822 START TEST thread 00:05:44.822 ************************************ 00:05:44.822 09:55:16 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:44.822 * Looking for test storage... 
00:05:44.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:44.822 09:55:16 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.822 09:55:16 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.822 09:55:16 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:45.081 09:55:17 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:45.081 09:55:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.081 09:55:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.081 09:55:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.081 09:55:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.081 09:55:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.081 09:55:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.081 09:55:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.081 09:55:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.081 09:55:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.081 09:55:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.081 09:55:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.081 09:55:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:45.081 09:55:17 thread -- scripts/common.sh@345 -- # : 1 00:05:45.081 09:55:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.081 09:55:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.081 09:55:17 thread -- scripts/common.sh@365 -- # decimal 1 00:05:45.081 09:55:17 thread -- scripts/common.sh@353 -- # local d=1 00:05:45.081 09:55:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.081 09:55:17 thread -- scripts/common.sh@355 -- # echo 1 00:05:45.081 09:55:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.081 09:55:17 thread -- scripts/common.sh@366 -- # decimal 2 00:05:45.082 09:55:17 thread -- scripts/common.sh@353 -- # local d=2 00:05:45.082 09:55:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.082 09:55:17 thread -- scripts/common.sh@355 -- # echo 2 00:05:45.082 09:55:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.082 09:55:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.082 09:55:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.082 09:55:17 thread -- scripts/common.sh@368 -- # return 0 00:05:45.082 09:55:17 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.082 09:55:17 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:45.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.082 --rc genhtml_branch_coverage=1 00:05:45.082 --rc genhtml_function_coverage=1 00:05:45.082 --rc genhtml_legend=1 00:05:45.082 --rc geninfo_all_blocks=1 00:05:45.082 --rc geninfo_unexecuted_blocks=1 00:05:45.082 00:05:45.082 ' 00:05:45.082 09:55:17 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:45.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.082 --rc genhtml_branch_coverage=1 00:05:45.082 --rc genhtml_function_coverage=1 00:05:45.082 --rc genhtml_legend=1 00:05:45.082 --rc geninfo_all_blocks=1 00:05:45.082 --rc geninfo_unexecuted_blocks=1 00:05:45.082 00:05:45.082 ' 00:05:45.082 09:55:17 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:45.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:45.082 --rc genhtml_branch_coverage=1 00:05:45.082 --rc genhtml_function_coverage=1 00:05:45.082 --rc genhtml_legend=1 00:05:45.082 --rc geninfo_all_blocks=1 00:05:45.082 --rc geninfo_unexecuted_blocks=1 00:05:45.082 00:05:45.082 ' 00:05:45.082 09:55:17 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:45.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.082 --rc genhtml_branch_coverage=1 00:05:45.082 --rc genhtml_function_coverage=1 00:05:45.082 --rc genhtml_legend=1 00:05:45.082 --rc geninfo_all_blocks=1 00:05:45.082 --rc geninfo_unexecuted_blocks=1 00:05:45.082 00:05:45.082 ' 00:05:45.082 09:55:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:45.082 09:55:17 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:45.082 09:55:17 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.082 09:55:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.082 ************************************ 00:05:45.082 START TEST thread_poller_perf 00:05:45.082 ************************************ 00:05:45.082 09:55:17 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:45.082 [2024-11-04 09:55:17.059935] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:45.082 [2024-11-04 09:55:17.060601] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59139 ] 00:05:45.082 [2024-11-04 09:55:17.203165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.340 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:45.340 [2024-11-04 09:55:17.266184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.272 [2024-11-04T09:55:18.442Z] ====================================== 00:05:46.272 [2024-11-04T09:55:18.442Z] busy:2205973793 (cyc) 00:05:46.272 [2024-11-04T09:55:18.442Z] total_run_count: 333000 00:05:46.272 [2024-11-04T09:55:18.442Z] tsc_hz: 2200000000 (cyc) 00:05:46.272 [2024-11-04T09:55:18.442Z] ====================================== 00:05:46.272 [2024-11-04T09:55:18.442Z] poller_cost: 6624 (cyc), 3010 (nsec) 00:05:46.272 00:05:46.272 real 0m1.281s 00:05:46.272 user 0m1.133s 00:05:46.272 sys 0m0.042s 00:05:46.272 09:55:18 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.272 09:55:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.272 ************************************ 00:05:46.272 END TEST thread_poller_perf 00:05:46.272 ************************************ 00:05:46.272 09:55:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:46.272 09:55:18 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:46.272 09:55:18 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.272 09:55:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.272 ************************************ 00:05:46.272 START TEST thread_poller_perf 00:05:46.272 ************************************ 00:05:46.272 09:55:18 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:46.272 [2024-11-04 09:55:18.391009] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:46.272 [2024-11-04 09:55:18.391120] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59174 ] 00:05:46.530 [2024-11-04 09:55:18.537502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.530 Running 1000 pollers for 1 seconds with 0 microseconds period. 
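The summary block above is the first poller_perf run: -b 1000 -l 1 -t 1 means 1000 registered pollers with a 1 microsecond period run for 1 second, as the "Running 1000 pollers for 1 seconds with 1 microseconds period" line states. The printed poller_cost is consistent with busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz; redoing that arithmetic in bash with this run's numbers (integer division, values copied from the output above):

    busy=2205973793 runs=333000 tsc_hz=2200000000
    cyc=$(( busy / runs ))                     # 6624 cycles per poller invocation
    nsec=$(( cyc * 1000000000 / tsc_hz ))      # 3010 ns at 2.2 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The second run below, with -l 0 (a 0 microsecond period), gives 486 cyc and 220 nsec by the same arithmetic.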
00:05:46.530 [2024-11-04 09:55:18.598212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.902 [2024-11-04T09:55:20.072Z] ====================================== 00:05:47.902 [2024-11-04T09:55:20.072Z] busy:2202533256 (cyc) 00:05:47.902 [2024-11-04T09:55:20.072Z] total_run_count: 4524000 00:05:47.902 [2024-11-04T09:55:20.072Z] tsc_hz: 2200000000 (cyc) 00:05:47.902 [2024-11-04T09:55:20.072Z] ====================================== 00:05:47.902 [2024-11-04T09:55:20.072Z] poller_cost: 486 (cyc), 220 (nsec) 00:05:47.902 00:05:47.902 real 0m1.276s 00:05:47.902 user 0m1.127s 00:05:47.902 sys 0m0.042s 00:05:47.902 09:55:19 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.902 09:55:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.902 ************************************ 00:05:47.902 END TEST thread_poller_perf 00:05:47.902 ************************************ 00:05:47.902 09:55:19 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:47.902 00:05:47.902 real 0m2.833s 00:05:47.902 user 0m2.410s 00:05:47.902 sys 0m0.207s 00:05:47.902 09:55:19 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.902 ************************************ 00:05:47.902 END TEST thread 00:05:47.902 ************************************ 00:05:47.902 09:55:19 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.902 09:55:19 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:47.902 09:55:19 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:47.902 09:55:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.902 09:55:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.902 09:55:19 -- common/autotest_common.sh@10 -- # set +x 00:05:47.902 ************************************ 00:05:47.902 START TEST app_cmdline 00:05:47.902 ************************************ 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:47.902 * Looking for test storage... 
00:05:47.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.902 09:55:19 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.902 --rc genhtml_branch_coverage=1 00:05:47.902 --rc genhtml_function_coverage=1 00:05:47.902 --rc genhtml_legend=1 00:05:47.902 --rc geninfo_all_blocks=1 00:05:47.902 --rc geninfo_unexecuted_blocks=1 00:05:47.902 00:05:47.902 ' 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.902 --rc genhtml_branch_coverage=1 00:05:47.902 --rc genhtml_function_coverage=1 00:05:47.902 --rc genhtml_legend=1 00:05:47.902 --rc geninfo_all_blocks=1 00:05:47.902 --rc geninfo_unexecuted_blocks=1 00:05:47.902 
00:05:47.902 ' 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.902 --rc genhtml_branch_coverage=1 00:05:47.902 --rc genhtml_function_coverage=1 00:05:47.902 --rc genhtml_legend=1 00:05:47.902 --rc geninfo_all_blocks=1 00:05:47.902 --rc geninfo_unexecuted_blocks=1 00:05:47.902 00:05:47.902 ' 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.902 --rc genhtml_branch_coverage=1 00:05:47.902 --rc genhtml_function_coverage=1 00:05:47.902 --rc genhtml_legend=1 00:05:47.902 --rc geninfo_all_blocks=1 00:05:47.902 --rc geninfo_unexecuted_blocks=1 00:05:47.902 00:05:47.902 ' 00:05:47.902 09:55:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:47.902 09:55:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59257 00:05:47.902 09:55:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59257 00:05:47.902 09:55:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59257 ']' 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.902 09:55:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:47.902 [2024-11-04 09:55:19.987692] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
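This target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods (command line above), so only those two methods are reachable over /var/tmp/spdk.sock; the version JSON shown below and the -32601 rejection both follow from that. Reproduced by hand from the SPDK repo root, the flow the test automates is roughly as follows (output abbreviated; backgrounding the target this way is only for illustration):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown below
    ./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected with -32601 "Method not found"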
00:05:47.902 [2024-11-04 09:55:19.988339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ] 00:05:48.188 [2024-11-04 09:55:20.138607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.188 [2024-11-04 09:55:20.203075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.188 [2024-11-04 09:55:20.273051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.446 09:55:20 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.446 09:55:20 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:48.446 09:55:20 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:48.703 { 00:05:48.703 "version": "SPDK v25.01-pre git sha1 fcc19e276", 00:05:48.703 "fields": { 00:05:48.703 "major": 25, 00:05:48.703 "minor": 1, 00:05:48.703 "patch": 0, 00:05:48.703 "suffix": "-pre", 00:05:48.703 "commit": "fcc19e276" 00:05:48.703 } 00:05:48.703 } 00:05:48.703 09:55:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:48.703 09:55:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:48.703 09:55:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:48.703 09:55:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:48.703 09:55:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:48.704 09:55:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.704 09:55:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.704 09:55:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:48.704 09:55:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:48.704 09:55:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:48.704 09:55:20 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:48.961 request: 00:05:48.961 { 00:05:48.961 "method": "env_dpdk_get_mem_stats", 00:05:48.961 "req_id": 1 00:05:48.961 } 00:05:48.961 Got JSON-RPC error response 00:05:48.961 response: 00:05:48.961 { 00:05:48.961 "code": -32601, 00:05:48.961 "message": "Method not found" 00:05:48.961 } 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.961 09:55:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59257 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59257 ']' 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59257 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59257 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.961 killing process with pid 59257 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59257' 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@971 -- # kill 59257 00:05:48.961 09:55:21 app_cmdline -- common/autotest_common.sh@976 -- # wait 59257 00:05:49.526 00:05:49.526 real 0m1.756s 00:05:49.526 user 0m2.116s 00:05:49.526 sys 0m0.466s 00:05:49.526 09:55:21 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.526 ************************************ 00:05:49.526 END TEST app_cmdline 00:05:49.526 ************************************ 00:05:49.526 09:55:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:49.526 09:55:21 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:49.526 09:55:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:49.526 09:55:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:49.526 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:05:49.526 ************************************ 00:05:49.526 START TEST version 00:05:49.526 ************************************ 00:05:49.526 09:55:21 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:49.526 * Looking for test storage... 
00:05:49.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:49.526 09:55:21 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.526 09:55:21 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.526 09:55:21 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.784 09:55:21 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.784 09:55:21 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.784 09:55:21 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.784 09:55:21 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.784 09:55:21 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.784 09:55:21 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.784 09:55:21 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.784 09:55:21 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.784 09:55:21 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.784 09:55:21 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.784 09:55:21 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.784 09:55:21 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.784 09:55:21 version -- scripts/common.sh@344 -- # case "$op" in 00:05:49.784 09:55:21 version -- scripts/common.sh@345 -- # : 1 00:05:49.784 09:55:21 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.784 09:55:21 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.784 09:55:21 version -- scripts/common.sh@365 -- # decimal 1 00:05:49.784 09:55:21 version -- scripts/common.sh@353 -- # local d=1 00:05:49.784 09:55:21 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.784 09:55:21 version -- scripts/common.sh@355 -- # echo 1 00:05:49.784 09:55:21 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.784 09:55:21 version -- scripts/common.sh@366 -- # decimal 2 00:05:49.784 09:55:21 version -- scripts/common.sh@353 -- # local d=2 00:05:49.784 09:55:21 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.784 09:55:21 version -- scripts/common.sh@355 -- # echo 2 00:05:49.784 09:55:21 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.784 09:55:21 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.784 09:55:21 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.784 09:55:21 version -- scripts/common.sh@368 -- # return 0 00:05:49.784 09:55:21 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.784 09:55:21 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.784 --rc genhtml_branch_coverage=1 00:05:49.784 --rc genhtml_function_coverage=1 00:05:49.784 --rc genhtml_legend=1 00:05:49.784 --rc geninfo_all_blocks=1 00:05:49.784 --rc geninfo_unexecuted_blocks=1 00:05:49.784 00:05:49.784 ' 00:05:49.784 09:55:21 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.784 --rc genhtml_branch_coverage=1 00:05:49.784 --rc genhtml_function_coverage=1 00:05:49.784 --rc genhtml_legend=1 00:05:49.784 --rc geninfo_all_blocks=1 00:05:49.784 --rc geninfo_unexecuted_blocks=1 00:05:49.784 00:05:49.784 ' 00:05:49.784 09:55:21 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.784 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:49.784 --rc genhtml_branch_coverage=1 00:05:49.784 --rc genhtml_function_coverage=1 00:05:49.784 --rc genhtml_legend=1 00:05:49.784 --rc geninfo_all_blocks=1 00:05:49.784 --rc geninfo_unexecuted_blocks=1 00:05:49.784 00:05:49.784 ' 00:05:49.784 09:55:21 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.784 --rc genhtml_branch_coverage=1 00:05:49.784 --rc genhtml_function_coverage=1 00:05:49.784 --rc genhtml_legend=1 00:05:49.784 --rc geninfo_all_blocks=1 00:05:49.784 --rc geninfo_unexecuted_blocks=1 00:05:49.784 00:05:49.784 ' 00:05:49.784 09:55:21 version -- app/version.sh@17 -- # get_header_version major 00:05:49.784 09:55:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.784 09:55:21 version -- app/version.sh@14 -- # cut -f2 00:05:49.784 09:55:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.784 09:55:21 version -- app/version.sh@17 -- # major=25 00:05:49.785 09:55:21 version -- app/version.sh@18 -- # get_header_version minor 00:05:49.785 09:55:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.785 09:55:21 version -- app/version.sh@14 -- # cut -f2 00:05:49.785 09:55:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.785 09:55:21 version -- app/version.sh@18 -- # minor=1 00:05:49.785 09:55:21 version -- app/version.sh@19 -- # get_header_version patch 00:05:49.785 09:55:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.785 09:55:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.785 09:55:21 version -- app/version.sh@14 -- # cut -f2 00:05:49.785 09:55:21 version -- app/version.sh@19 -- # patch=0 00:05:49.785 09:55:21 version -- app/version.sh@20 -- # get_header_version suffix 00:05:49.785 09:55:21 version -- app/version.sh@14 -- # cut -f2 00:05:49.785 09:55:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:49.785 09:55:21 version -- app/version.sh@14 -- # tr -d '"' 00:05:49.785 09:55:21 version -- app/version.sh@20 -- # suffix=-pre 00:05:49.785 09:55:21 version -- app/version.sh@22 -- # version=25.1 00:05:49.785 09:55:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:49.785 09:55:21 version -- app/version.sh@28 -- # version=25.1rc0 00:05:49.785 09:55:21 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:49.785 09:55:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:49.785 09:55:21 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:49.785 09:55:21 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:49.785 00:05:49.785 real 0m0.262s 00:05:49.785 user 0m0.163s 00:05:49.785 sys 0m0.131s 00:05:49.785 09:55:21 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:49.785 09:55:21 version -- common/autotest_common.sh@10 -- # set +x 00:05:49.785 ************************************ 00:05:49.785 END TEST version 00:05:49.785 ************************************ 00:05:49.785 09:55:21 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:49.785 09:55:21 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:49.785 09:55:21 -- spdk/autotest.sh@194 -- # uname -s 00:05:49.785 09:55:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:49.785 09:55:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:49.785 09:55:21 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:49.785 09:55:21 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:49.785 09:55:21 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:49.785 09:55:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:49.785 09:55:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:49.785 09:55:21 -- common/autotest_common.sh@10 -- # set +x 00:05:49.785 ************************************ 00:05:49.785 START TEST spdk_dd 00:05:49.785 ************************************ 00:05:49.785 09:55:21 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:49.785 * Looking for test storage... 00:05:49.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:49.785 09:55:21 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.785 09:55:21 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.785 09:55:21 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.043 09:55:22 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.043 09:55:22 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:50.043 09:55:22 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.043 09:55:22 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.043 --rc genhtml_branch_coverage=1 00:05:50.043 --rc genhtml_function_coverage=1 00:05:50.043 --rc genhtml_legend=1 00:05:50.043 --rc geninfo_all_blocks=1 00:05:50.043 --rc geninfo_unexecuted_blocks=1 00:05:50.043 00:05:50.043 ' 00:05:50.043 09:55:22 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.043 --rc genhtml_branch_coverage=1 00:05:50.043 --rc genhtml_function_coverage=1 00:05:50.043 --rc genhtml_legend=1 00:05:50.043 --rc geninfo_all_blocks=1 00:05:50.043 --rc geninfo_unexecuted_blocks=1 00:05:50.043 00:05:50.043 ' 00:05:50.043 09:55:22 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.043 --rc genhtml_branch_coverage=1 00:05:50.043 --rc genhtml_function_coverage=1 00:05:50.043 --rc genhtml_legend=1 00:05:50.043 --rc geninfo_all_blocks=1 00:05:50.043 --rc geninfo_unexecuted_blocks=1 00:05:50.043 00:05:50.043 ' 00:05:50.043 09:55:22 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.044 --rc genhtml_branch_coverage=1 00:05:50.044 --rc genhtml_function_coverage=1 00:05:50.044 --rc genhtml_legend=1 00:05:50.044 --rc geninfo_all_blocks=1 00:05:50.044 --rc geninfo_unexecuted_blocks=1 00:05:50.044 00:05:50.044 ' 00:05:50.044 09:55:22 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.044 09:55:22 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.044 09:55:22 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.044 09:55:22 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.044 09:55:22 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.044 09:55:22 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.044 09:55:22 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.044 09:55:22 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.044 09:55:22 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:50.044 09:55:22 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.044 09:55:22 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:50.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.302 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.302 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:50.561 09:55:22 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:50.561 09:55:22 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:50.561 09:55:22 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:50.561 09:55:22 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:50.561 09:55:22 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
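The check_liburing step traced here reads the NEEDED entries that objdump -p reports for the spdk_dd binary and compares each shared-library name against liburing.so.*; when a match is found the trace prints "spdk_dd linked to liburing" and later sets liburing_in_use=1. A minimal standalone sketch of that detection idea follows; it is illustrative only (the loop and variable names are not dd/common.sh's own code, and the binary path is simply the one this log uses):

    # Sketch: detect whether a binary is dynamically linked against liburing.
    # Mirrors the objdump -p | grep NEEDED pattern visible in the trace; illustrative, not dd/common.sh verbatim.
    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path taken from this log
    liburing_in_use=0
    while read -r _ lib _; do
        # NEEDED lines look like: "  NEEDED  liburing.so.2"; the second field is the library name
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p "$bin" | grep NEEDED)
    printf 'liburing_in_use=%s\n' "$liburing_in_use"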
00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:50.561 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:50.562 * spdk_dd linked to liburing 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:50.562 09:55:22 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:50.562 09:55:22 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:50.563 09:55:22 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:50.563 09:55:22 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:50.563 09:55:22 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:50.563 09:55:22 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:50.563 09:55:22 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:50.563 09:55:22 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:50.563 09:55:22 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:50.563 09:55:22 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:50.563 09:55:22 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:50.563 09:55:22 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.563 09:55:22 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:50.563 ************************************ 00:05:50.563 START TEST spdk_dd_basic_rw 00:05:50.563 ************************************ 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:50.563 * Looking for test storage... 00:05:50.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:50.563 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.823 --rc genhtml_branch_coverage=1 00:05:50.823 --rc genhtml_function_coverage=1 00:05:50.823 --rc genhtml_legend=1 00:05:50.823 --rc geninfo_all_blocks=1 00:05:50.823 --rc geninfo_unexecuted_blocks=1 00:05:50.823 00:05:50.823 ' 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.823 --rc genhtml_branch_coverage=1 00:05:50.823 --rc genhtml_function_coverage=1 00:05:50.823 --rc genhtml_legend=1 00:05:50.823 --rc geninfo_all_blocks=1 00:05:50.823 --rc geninfo_unexecuted_blocks=1 00:05:50.823 00:05:50.823 ' 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.823 --rc genhtml_branch_coverage=1 00:05:50.823 --rc genhtml_function_coverage=1 00:05:50.823 --rc genhtml_legend=1 00:05:50.823 --rc geninfo_all_blocks=1 00:05:50.823 --rc geninfo_unexecuted_blocks=1 00:05:50.823 00:05:50.823 ' 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.823 --rc genhtml_branch_coverage=1 00:05:50.823 --rc genhtml_function_coverage=1 00:05:50.823 --rc genhtml_legend=1 00:05:50.823 --rc geninfo_all_blocks=1 00:05:50.823 --rc geninfo_unexecuted_blocks=1 00:05:50.823 00:05:50.823 ' 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.823 09:55:22 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
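The get_native_nvme_bs step that begins next captures the controller's spdk_nvme_identify report for 0000:00:10.0 and pulls the native block size out of it with two regex matches: first the current LBA format number, then that format's Data Size (4096 for this controller). A minimal sketch of the same extraction, assuming the identify output layout shown in the trace below (this is not the harness's own get_native_nvme_bs code, and it needs root plus a bound NVMe device to run):

    # Sketch: derive a namespace's native block size from spdk_nvme_identify output.
    # Follows the two-step match visible in the trace; illustrative only.
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
    re_current='Current LBA Format: *LBA Format #([0-9]+)'
    if [[ $id =~ $re_current ]]; then
        lbaf=${BASH_REMATCH[1]}                                  # "04" for the controller in this log
        re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
        [[ $id =~ $re_size ]] && printf 'native_bs=%s\n' "${BASH_REMATCH[1]}"   # prints 4096 here
    fi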
00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:50.824 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:50.825 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:50.825 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:50.825 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:50.825 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:50.825 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:50.825 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.826 ************************************ 00:05:50.826 START TEST dd_bs_lt_native_bs 00:05:50.826 ************************************ 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:50.826 09:55:22 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:51.083 { 00:05:51.083 "subsystems": [ 00:05:51.083 { 00:05:51.083 "subsystem": "bdev", 00:05:51.083 "config": [ 00:05:51.083 { 00:05:51.083 "params": { 00:05:51.083 "trtype": "pcie", 00:05:51.083 "traddr": "0000:00:10.0", 00:05:51.083 "name": "Nvme0" 00:05:51.083 }, 00:05:51.083 "method": "bdev_nvme_attach_controller" 00:05:51.083 }, 00:05:51.083 { 00:05:51.083 "method": "bdev_wait_for_examine" 00:05:51.083 } 00:05:51.083 ] 00:05:51.083 } 00:05:51.083 ] 00:05:51.083 } 00:05:51.083 [2024-11-04 09:55:23.019469] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:05:51.083 [2024-11-04 09:55:23.019612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59601 ] 00:05:51.083 [2024-11-04 09:55:23.171007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.083 [2024-11-04 09:55:23.235483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.341 [2024-11-04 09:55:23.294290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.341 [2024-11-04 09:55:23.408258] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:51.341 [2024-11-04 09:55:23.408345] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:51.602 [2024-11-04 09:55:23.535586] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:51.602 00:05:51.602 real 0m0.641s 00:05:51.602 user 0m0.430s 00:05:51.602 sys 0m0.161s 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.602 09:55:23 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:51.602 ************************************ 00:05:51.602 END TEST dd_bs_lt_native_bs 00:05:51.602 ************************************ 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.602 ************************************ 00:05:51.602 START TEST dd_rw 00:05:51.602 ************************************ 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:51.602 09:55:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.188 09:55:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:52.188 09:55:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:52.188 09:55:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.188 09:55:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.188 [2024-11-04 09:55:24.326871] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:52.188 [2024-11-04 09:55:24.327002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59632 ] 00:05:52.188 { 00:05:52.188 "subsystems": [ 00:05:52.188 { 00:05:52.188 "subsystem": "bdev", 00:05:52.188 "config": [ 00:05:52.188 { 00:05:52.188 "params": { 00:05:52.188 "trtype": "pcie", 00:05:52.188 "traddr": "0000:00:10.0", 00:05:52.188 "name": "Nvme0" 00:05:52.188 }, 00:05:52.188 "method": "bdev_nvme_attach_controller" 00:05:52.188 }, 00:05:52.188 { 00:05:52.188 "method": "bdev_wait_for_examine" 00:05:52.188 } 00:05:52.188 ] 00:05:52.188 } 00:05:52.188 ] 00:05:52.188 } 00:05:52.446 [2024-11-04 09:55:24.474026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.446 [2024-11-04 09:55:24.529238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.446 [2024-11-04 09:55:24.583766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.704  [2024-11-04T09:55:25.131Z] Copying: 60/60 [kB] (average 29 MBps) 00:05:52.961 00:05:52.961 09:55:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:52.961 09:55:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:52.961 09:55:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.961 09:55:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.961 [2024-11-04 09:55:24.939693] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
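The records above begin the first dd_rw pass: gen_bytes fills dd.dump0 with 61440 bytes (count=15 blocks of the 4096-byte native block size), spdk_dd writes that file into the Nvme0n1 bdev at queue depth 1, and the read-back into dd.dump1 that starts here is followed by a diff of the two dumps. A minimal stand-alone sketch of one such pass follows; the spdk_dd flags, paths and Nvme0 attach parameters are taken from this trace, while the temporary config file (in place of the /dev/fd/62 descriptor produced by gen_conf) and head reading /dev/urandom (in place of the gen_bytes helper) are illustrative substitutions.

#!/usr/bin/env bash
# One dd_rw pass in miniature (bs=4096, qd=1, count=15), mirroring the trace
# above. Assumes the SPDK repo at /home/vagrant/spdk_repo/spdk and an NVMe
# controller at 0000:00:10.0 already bound to a userspace driver.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
DD=$SPDK/build/bin/spdk_dd
CONF=$(mktemp)   # stand-in for the JSON that gen_conf emits on /dev/fd/62

cat > "$CONF" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

bs=4096 qd=1 count=15
size=$((bs * count))                                   # 61440 bytes, as in the trace

head -c "$size" /dev/urandom > dd.dump0                # gen_bytes substitute
"$DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$CONF"
"$DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$CONF"
diff -q dd.dump0 dd.dump1                              # the pass criterion used by dd_rw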
00:05:52.961 [2024-11-04 09:55:24.939788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59651 ] 00:05:52.961 { 00:05:52.961 "subsystems": [ 00:05:52.961 { 00:05:52.961 "subsystem": "bdev", 00:05:52.961 "config": [ 00:05:52.961 { 00:05:52.961 "params": { 00:05:52.961 "trtype": "pcie", 00:05:52.961 "traddr": "0000:00:10.0", 00:05:52.961 "name": "Nvme0" 00:05:52.961 }, 00:05:52.961 "method": "bdev_nvme_attach_controller" 00:05:52.961 }, 00:05:52.961 { 00:05:52.961 "method": "bdev_wait_for_examine" 00:05:52.961 } 00:05:52.961 ] 00:05:52.961 } 00:05:52.961 ] 00:05:52.961 } 00:05:52.961 [2024-11-04 09:55:25.088361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.219 [2024-11-04 09:55:25.144299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.219 [2024-11-04 09:55:25.198335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.219  [2024-11-04T09:55:25.646Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:53.476 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.476 09:55:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.476 { 00:05:53.476 "subsystems": [ 00:05:53.476 { 00:05:53.476 "subsystem": "bdev", 00:05:53.476 "config": [ 00:05:53.477 { 00:05:53.477 "params": { 00:05:53.477 "trtype": "pcie", 00:05:53.477 "traddr": "0000:00:10.0", 00:05:53.477 "name": "Nvme0" 00:05:53.477 }, 00:05:53.477 "method": "bdev_nvme_attach_controller" 00:05:53.477 }, 00:05:53.477 { 00:05:53.477 "method": "bdev_wait_for_examine" 00:05:53.477 } 00:05:53.477 ] 00:05:53.477 } 00:05:53.477 ] 00:05:53.477 } 00:05:53.477 [2024-11-04 09:55:25.564702] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:53.477 [2024-11-04 09:55:25.564808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59661 ] 00:05:53.734 [2024-11-04 09:55:25.713983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.734 [2024-11-04 09:55:25.775189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.734 [2024-11-04 09:55:25.829844] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.992  [2024-11-04T09:55:26.162Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:53.992 00:05:53.992 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:53.992 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:53.992 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:53.992 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:53.992 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:53.992 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:53.992 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.558 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:54.558 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:54.815 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.815 09:55:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.815 [2024-11-04 09:55:26.777819] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:54.816 [2024-11-04 09:55:26.778146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59685 ] 00:05:54.816 { 00:05:54.816 "subsystems": [ 00:05:54.816 { 00:05:54.816 "subsystem": "bdev", 00:05:54.816 "config": [ 00:05:54.816 { 00:05:54.816 "params": { 00:05:54.816 "trtype": "pcie", 00:05:54.816 "traddr": "0000:00:10.0", 00:05:54.816 "name": "Nvme0" 00:05:54.816 }, 00:05:54.816 "method": "bdev_nvme_attach_controller" 00:05:54.816 }, 00:05:54.816 { 00:05:54.816 "method": "bdev_wait_for_examine" 00:05:54.816 } 00:05:54.816 ] 00:05:54.816 } 00:05:54.816 ] 00:05:54.816 } 00:05:54.816 [2024-11-04 09:55:26.923686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.816 [2024-11-04 09:55:26.979211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.074 [2024-11-04 09:55:27.033376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.074  [2024-11-04T09:55:27.501Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:55.331 00:05:55.331 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:55.331 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:55.331 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.331 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.331 { 00:05:55.331 "subsystems": [ 00:05:55.331 { 00:05:55.331 "subsystem": "bdev", 00:05:55.331 "config": [ 00:05:55.331 { 00:05:55.331 "params": { 00:05:55.331 "trtype": "pcie", 00:05:55.331 "traddr": "0000:00:10.0", 00:05:55.331 "name": "Nvme0" 00:05:55.331 }, 00:05:55.331 "method": "bdev_nvme_attach_controller" 00:05:55.331 }, 00:05:55.331 { 00:05:55.331 "method": "bdev_wait_for_examine" 00:05:55.331 } 00:05:55.331 ] 00:05:55.331 } 00:05:55.331 ] 00:05:55.331 } 00:05:55.331 [2024-11-04 09:55:27.431282] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
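The same write/read/diff cycle then repeats across the matrix that dd_rw set up at its start: the 4096-byte native block size shifted left twice (4096, 8192, 16384) crossed with queue depths 1 and 64. The per-size block counts below are simply read off this trace (15, 7 and 3 blocks respectively), so this is a sketch of the iteration order rather than the test script itself.

#!/usr/bin/env bash
# The bs/qd matrix exercised by dd_rw, with per-size counts as observed in the trace.
set -euo pipefail

native_bs=4096
qds=(1 64)
bss=()
for s in 0 1 2; do
  bss+=($((native_bs << s)))                 # 4096, 8192, 16384
done

declare -A counts=([4096]=15 [8192]=7 [16384]=3)
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    echo "pass: bs=$bs qd=$qd count=${counts[$bs]} size=$((bs * ${counts[$bs]}))"
  done
done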
00:05:55.331 [2024-11-04 09:55:27.431383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:05:55.589 [2024-11-04 09:55:27.577315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.589 [2024-11-04 09:55:27.629590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.589 [2024-11-04 09:55:27.682220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.847  [2024-11-04T09:55:28.017Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:55.847 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.847 09:55:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.105 { 00:05:56.105 "subsystems": [ 00:05:56.105 { 00:05:56.105 "subsystem": "bdev", 00:05:56.105 "config": [ 00:05:56.105 { 00:05:56.105 "params": { 00:05:56.105 "trtype": "pcie", 00:05:56.105 "traddr": "0000:00:10.0", 00:05:56.105 "name": "Nvme0" 00:05:56.105 }, 00:05:56.105 "method": "bdev_nvme_attach_controller" 00:05:56.105 }, 00:05:56.105 { 00:05:56.105 "method": "bdev_wait_for_examine" 00:05:56.105 } 00:05:56.105 ] 00:05:56.105 } 00:05:56.105 ] 00:05:56.105 } 00:05:56.105 [2024-11-04 09:55:28.049927] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:56.105 [2024-11-04 09:55:28.050058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59720 ] 00:05:56.105 [2024-11-04 09:55:28.197573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.105 [2024-11-04 09:55:28.257613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.363 [2024-11-04 09:55:28.314878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.363  [2024-11-04T09:55:28.790Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:56.620 00:05:56.621 09:55:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:56.621 09:55:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:56.621 09:55:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:56.621 09:55:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:56.621 09:55:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:56.621 09:55:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:56.621 09:55:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:56.621 09:55:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.186 09:55:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:57.186 09:55:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:57.186 09:55:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.186 09:55:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.186 [2024-11-04 09:55:29.227010] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:57.186 [2024-11-04 09:55:29.227351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] { 00:05:57.186 "subsystems": [ 00:05:57.186 { 00:05:57.186 "subsystem": "bdev", 00:05:57.186 "config": [ 00:05:57.186 { 00:05:57.186 "params": { 00:05:57.186 "trtype": "pcie", 00:05:57.186 "traddr": "0000:00:10.0", 00:05:57.186 "name": "Nvme0" 00:05:57.186 }, 00:05:57.186 "method": "bdev_nvme_attach_controller" 00:05:57.186 }, 00:05:57.186 { 00:05:57.186 "method": "bdev_wait_for_examine" 00:05:57.186 } 00:05:57.186 ] 00:05:57.186 } 00:05:57.186 ] 00:05:57.186 } 00:05:57.444 [2024-11-04 09:55:29.377069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.444 [2024-11-04 09:55:29.440133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.444 [2024-11-04 09:55:29.494367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.444  [2024-11-04T09:55:29.871Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:57.701 00:05:57.701 09:55:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:57.701 09:55:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:57.701 09:55:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.701 09:55:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.701 [2024-11-04 09:55:29.862686] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:57.701 [2024-11-04 09:55:29.862806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59747 ] 00:05:57.701 { 00:05:57.701 "subsystems": [ 00:05:57.701 { 00:05:57.701 "subsystem": "bdev", 00:05:57.701 "config": [ 00:05:57.701 { 00:05:57.701 "params": { 00:05:57.701 "trtype": "pcie", 00:05:57.701 "traddr": "0000:00:10.0", 00:05:57.701 "name": "Nvme0" 00:05:57.701 }, 00:05:57.701 "method": "bdev_nvme_attach_controller" 00:05:57.701 }, 00:05:57.701 { 00:05:57.701 "method": "bdev_wait_for_examine" 00:05:57.701 } 00:05:57.701 ] 00:05:57.701 } 00:05:57.701 ] 00:05:57.701 } 00:05:57.957 [2024-11-04 09:55:30.010120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.957 [2024-11-04 09:55:30.056206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.957 [2024-11-04 09:55:30.110737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.215  [2024-11-04T09:55:30.642Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:58.472 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:58.472 09:55:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.472 { 00:05:58.472 "subsystems": [ 00:05:58.472 { 00:05:58.472 "subsystem": "bdev", 00:05:58.472 "config": [ 00:05:58.472 { 00:05:58.472 "params": { 00:05:58.472 "trtype": "pcie", 00:05:58.472 "traddr": "0000:00:10.0", 00:05:58.472 "name": "Nvme0" 00:05:58.472 }, 00:05:58.472 "method": "bdev_nvme_attach_controller" 00:05:58.472 }, 00:05:58.472 { 00:05:58.472 "method": "bdev_wait_for_examine" 00:05:58.472 } 00:05:58.472 ] 00:05:58.472 } 00:05:58.472 ] 00:05:58.472 } 00:05:58.472 [2024-11-04 09:55:30.474361] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
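Each pass ends the way the records above do: the two dumps are diffed and clear_nvme wipes the start of the bdev by writing a single 1 MiB block of zeroes from /dev/zero, so the next pass cannot be satisfied by data left over from the previous one. Reduced to its spdk_dd call it looks as follows; the config-file argument is an assumption, since the test itself supplies the JSON on /dev/fd/62.

#!/usr/bin/env bash
# clear_nvme, reduced to its spdk_dd invocation: overwrite the first 1 MiB of
# the Nvme0n1 bdev with zeroes between dd_rw passes.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
CONF=${1:?path to the bdev JSON config}   # gen_conf output in the real test

"$SPDK/build/bin/spdk_dd" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF"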
00:05:58.472 [2024-11-04 09:55:30.474461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59768 ] 00:05:58.472 [2024-11-04 09:55:30.626846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.729 [2024-11-04 09:55:30.709649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.729 [2024-11-04 09:55:30.768031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.729  [2024-11-04T09:55:31.156Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:58.986 00:05:58.986 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:58.986 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:58.986 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:58.986 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:58.986 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:58.986 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:58.986 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.551 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:59.551 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:59.551 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:59.551 09:55:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:59.551 [2024-11-04 09:55:31.704959] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:05:59.551 [2024-11-04 09:55:31.705656] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59787 ] 00:05:59.551 { 00:05:59.551 "subsystems": [ 00:05:59.551 { 00:05:59.551 "subsystem": "bdev", 00:05:59.551 "config": [ 00:05:59.551 { 00:05:59.551 "params": { 00:05:59.551 "trtype": "pcie", 00:05:59.551 "traddr": "0000:00:10.0", 00:05:59.551 "name": "Nvme0" 00:05:59.551 }, 00:05:59.551 "method": "bdev_nvme_attach_controller" 00:05:59.551 }, 00:05:59.551 { 00:05:59.551 "method": "bdev_wait_for_examine" 00:05:59.551 } 00:05:59.551 ] 00:05:59.551 } 00:05:59.551 ] 00:05:59.551 } 00:05:59.861 [2024-11-04 09:55:31.850719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.861 [2024-11-04 09:55:31.912550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.861 [2024-11-04 09:55:31.967202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.118  [2024-11-04T09:55:32.288Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:00.118 00:06:00.118 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:00.118 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:00.118 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.118 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.376 { 00:06:00.376 "subsystems": [ 00:06:00.376 { 00:06:00.376 "subsystem": "bdev", 00:06:00.376 "config": [ 00:06:00.376 { 00:06:00.376 "params": { 00:06:00.376 "trtype": "pcie", 00:06:00.376 "traddr": "0000:00:10.0", 00:06:00.376 "name": "Nvme0" 00:06:00.376 }, 00:06:00.376 "method": "bdev_nvme_attach_controller" 00:06:00.376 }, 00:06:00.376 { 00:06:00.376 "method": "bdev_wait_for_examine" 00:06:00.376 } 00:06:00.376 ] 00:06:00.376 } 00:06:00.376 ] 00:06:00.376 } 00:06:00.376 [2024-11-04 09:55:32.326119] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:00.376 [2024-11-04 09:55:32.326222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59806 ] 00:06:00.376 [2024-11-04 09:55:32.472173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.376 [2024-11-04 09:55:32.532469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.633 [2024-11-04 09:55:32.586227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.633  [2024-11-04T09:55:33.061Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:00.891 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.891 09:55:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.891 { 00:06:00.891 "subsystems": [ 00:06:00.891 { 00:06:00.891 "subsystem": "bdev", 00:06:00.891 "config": [ 00:06:00.891 { 00:06:00.891 "params": { 00:06:00.891 "trtype": "pcie", 00:06:00.891 "traddr": "0000:00:10.0", 00:06:00.891 "name": "Nvme0" 00:06:00.891 }, 00:06:00.891 "method": "bdev_nvme_attach_controller" 00:06:00.891 }, 00:06:00.891 { 00:06:00.891 "method": "bdev_wait_for_examine" 00:06:00.891 } 00:06:00.891 ] 00:06:00.891 } 00:06:00.891 ] 00:06:00.891 } 00:06:00.891 [2024-11-04 09:55:32.945407] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:00.891 [2024-11-04 09:55:32.945513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59816 ] 00:06:01.150 [2024-11-04 09:55:33.087680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.150 [2024-11-04 09:55:33.148041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.150 [2024-11-04 09:55:33.199676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.150  [2024-11-04T09:55:33.578Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:01.408 00:06:01.408 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:01.408 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:01.408 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:01.408 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:01.408 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:01.408 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:01.408 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:01.408 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.974 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:01.974 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:01.974 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:01.974 09:55:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:01.974 { 00:06:01.974 "subsystems": [ 00:06:01.974 { 00:06:01.974 "subsystem": "bdev", 00:06:01.974 "config": [ 00:06:01.974 { 00:06:01.974 "params": { 00:06:01.974 "trtype": "pcie", 00:06:01.974 "traddr": "0000:00:10.0", 00:06:01.974 "name": "Nvme0" 00:06:01.974 }, 00:06:01.974 "method": "bdev_nvme_attach_controller" 00:06:01.974 }, 00:06:01.974 { 00:06:01.974 "method": "bdev_wait_for_examine" 00:06:01.974 } 00:06:01.974 ] 00:06:01.974 } 00:06:01.974 ] 00:06:01.974 } 00:06:01.974 [2024-11-04 09:55:34.070031] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:01.974 [2024-11-04 09:55:34.070484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59835 ] 00:06:02.231 [2024-11-04 09:55:34.221691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.231 [2024-11-04 09:55:34.282347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.231 [2024-11-04 09:55:34.335457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.489  [2024-11-04T09:55:34.659Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:02.489 00:06:02.489 09:55:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:02.489 09:55:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:02.489 09:55:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:02.489 09:55:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:02.747 { 00:06:02.747 "subsystems": [ 00:06:02.747 { 00:06:02.747 "subsystem": "bdev", 00:06:02.747 "config": [ 00:06:02.747 { 00:06:02.747 "params": { 00:06:02.747 "trtype": "pcie", 00:06:02.747 "traddr": "0000:00:10.0", 00:06:02.747 "name": "Nvme0" 00:06:02.747 }, 00:06:02.747 "method": "bdev_nvme_attach_controller" 00:06:02.747 }, 00:06:02.747 { 00:06:02.747 "method": "bdev_wait_for_examine" 00:06:02.747 } 00:06:02.747 ] 00:06:02.747 } 00:06:02.747 ] 00:06:02.747 } 00:06:02.747 [2024-11-04 09:55:34.690329] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:02.747 [2024-11-04 09:55:34.690431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:06:02.747 [2024-11-04 09:55:34.837321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.747 [2024-11-04 09:55:34.895903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.005 [2024-11-04 09:55:34.952075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.005  [2024-11-04T09:55:35.433Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:03.263 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:03.263 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:03.263 { 00:06:03.263 "subsystems": [ 00:06:03.263 { 00:06:03.263 "subsystem": "bdev", 00:06:03.263 "config": [ 00:06:03.263 { 00:06:03.263 "params": { 00:06:03.263 "trtype": "pcie", 00:06:03.263 "traddr": "0000:00:10.0", 00:06:03.263 "name": "Nvme0" 00:06:03.263 }, 00:06:03.263 "method": "bdev_nvme_attach_controller" 00:06:03.263 }, 00:06:03.263 { 00:06:03.263 "method": "bdev_wait_for_examine" 00:06:03.263 } 00:06:03.263 ] 00:06:03.263 } 00:06:03.263 ] 00:06:03.263 } 00:06:03.263 [2024-11-04 09:55:35.321285] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:03.263 [2024-11-04 09:55:35.321396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59875 ] 00:06:03.521 [2024-11-04 09:55:35.470568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.521 [2024-11-04 09:55:35.525605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.521 [2024-11-04 09:55:35.577975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.521  [2024-11-04T09:55:35.949Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:03.779 00:06:03.779 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:03.779 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:03.779 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:03.779 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:03.779 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:03.779 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:03.779 09:55:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.344 09:55:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:04.344 09:55:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:04.344 09:55:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.344 09:55:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.344 [2024-11-04 09:55:36.405745] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:04.344 [2024-11-04 09:55:36.406349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59894 ] 00:06:04.344 { 00:06:04.344 "subsystems": [ 00:06:04.344 { 00:06:04.344 "subsystem": "bdev", 00:06:04.344 "config": [ 00:06:04.344 { 00:06:04.344 "params": { 00:06:04.344 "trtype": "pcie", 00:06:04.344 "traddr": "0000:00:10.0", 00:06:04.344 "name": "Nvme0" 00:06:04.344 }, 00:06:04.344 "method": "bdev_nvme_attach_controller" 00:06:04.344 }, 00:06:04.344 { 00:06:04.344 "method": "bdev_wait_for_examine" 00:06:04.344 } 00:06:04.344 ] 00:06:04.344 } 00:06:04.344 ] 00:06:04.344 } 00:06:04.601 [2024-11-04 09:55:36.555289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.601 [2024-11-04 09:55:36.603161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.601 [2024-11-04 09:55:36.657622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.601  [2024-11-04T09:55:37.030Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:04.860 00:06:04.860 09:55:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:04.860 09:55:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:04.860 09:55:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:04.860 09:55:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:04.860 [2024-11-04 09:55:37.008058] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:04.860 [2024-11-04 09:55:37.008160] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59902 ] 00:06:04.860 { 00:06:04.860 "subsystems": [ 00:06:04.860 { 00:06:04.860 "subsystem": "bdev", 00:06:04.860 "config": [ 00:06:04.860 { 00:06:04.860 "params": { 00:06:04.860 "trtype": "pcie", 00:06:04.860 "traddr": "0000:00:10.0", 00:06:04.860 "name": "Nvme0" 00:06:04.860 }, 00:06:04.860 "method": "bdev_nvme_attach_controller" 00:06:04.860 }, 00:06:04.860 { 00:06:04.860 "method": "bdev_wait_for_examine" 00:06:04.860 } 00:06:04.860 ] 00:06:04.860 } 00:06:04.860 ] 00:06:04.860 } 00:06:05.118 [2024-11-04 09:55:37.154781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.118 [2024-11-04 09:55:37.217316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.118 [2024-11-04 09:55:37.270571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.376  [2024-11-04T09:55:37.804Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:05.634 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:05.634 09:55:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:05.634 { 00:06:05.634 "subsystems": [ 00:06:05.634 { 00:06:05.634 "subsystem": "bdev", 00:06:05.634 "config": [ 00:06:05.634 { 00:06:05.634 "params": { 00:06:05.634 "trtype": "pcie", 00:06:05.634 "traddr": "0000:00:10.0", 00:06:05.634 "name": "Nvme0" 00:06:05.634 }, 00:06:05.634 "method": "bdev_nvme_attach_controller" 00:06:05.634 }, 00:06:05.634 { 00:06:05.634 "method": "bdev_wait_for_examine" 00:06:05.634 } 00:06:05.634 ] 00:06:05.634 } 00:06:05.634 ] 00:06:05.634 } 00:06:05.634 [2024-11-04 09:55:37.642642] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:05.634 [2024-11-04 09:55:37.642754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:06:05.634 [2024-11-04 09:55:37.789347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.892 [2024-11-04 09:55:37.848642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.892 [2024-11-04 09:55:37.905314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.892  [2024-11-04T09:55:38.320Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:06.150 00:06:06.150 00:06:06.150 real 0m14.563s 00:06:06.150 user 0m10.717s 00:06:06.150 sys 0m5.390s 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.150 ************************************ 00:06:06.150 END TEST dd_rw 00:06:06.150 ************************************ 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:06.150 ************************************ 00:06:06.150 START TEST dd_rw_offset 00:06:06.150 ************************************ 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:06.150 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:06.408 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:06.408 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=rdl8s5ic2c6jvdzxmyoqhtd04we3fr5275n7wlgorvcpu8sgbu8zh7ixpj1e0fszmqbj1y1p4ebk35rz2drmr85izrrzrecbkx8ekzyinary3kgebudht31dvpud4jfi8q3kmkkimxo5pjdknpcv2apfb3p9ytvg27nxcalcbdow0ppdl5zundviptxbkn6712ni2n1wihykrmpsw029y6tz58bjmlnly5yqdnei3mlidtdcp11f4t5q53frq26afvzcva2alsqm90zsb304tfeje3hnajnks8p23pm9a6p8rhvpiyeow2wc1dn1vthitbzg40k167pauph0xhlxem2940tvx00zcrfoa6v2v94d59upqk2dpkzlgb5giwi8eag7hb2tdmt3iumzf835u8p99zcexwfmtlypbd5mr9wa0ecsrcv28mbasaptkdzx98tads67c1e8dhqm5w09fe8r53fqpucnv1xuorb624pf7hxn25yk6xzphdbsf8d6lgtxjpis9sdj25raewb2xmyrncemloolfat515cham644y7jqyx2ee9o09fzq2n81xk9uezlx5e5qa5vs5xt7afeyki1e8qxlm5uz6jr3wq1zgk5fjhuziw9bhkgdyi85aaqyd8jzsha1zk2igwhwike3w1niithbt93q0htu210zrv32z6cjlsr99cwjb53syw3eg1tzeob3hjhv5cbysnmwun75hdth4zm67rnzva5h0ruex0zmkz3m6tkanliiw02pu2wyx9b1q0aqcq16hlsb72vk39vcfawr0oeev4eytchimzugyxgcrmidb1ct6ukgm3gk6qt3329njr2yntz44uarwo55xql97hlxouupxqvckrghncudvsamjnfi0frcthyq6xmrp8oi6swibswfrlyj5qxiuz2lv1lea826c29kcth1du9fcgtwivmvj9nkplfjpj1855pcsxmfzmecu4bxsyxybm5t88535vte1mlxtmk74uikmqxqsifmd9j9gj0efe2bf2fdsy8yimy483vho6fw8hkqzfpdrjho1mgq6qtaavkw9vgpf7vfkn5ea9g2y2borqpk8llugmqb0yxa0r5m93m4b582yn7so54syfnt188quxxljveco587rle2vff5ofm4c7htf7mtp8zxmj35785sffg6k05vhj0buzmxzju1q6efp8r60cc0rj5aze781pr3v8aaefa4m4liaqe799xk3iq24y3d3c5cqtmdbtwps5v4prbwx2o8v70tc715dvnzikmnpr3e1ibr6qyv2rauljdkkhz7o2fc94o9h8rhdsqyfet40q6k9dzn5qxb9uz2zdq2s5zqtpgs0o9tsxu31q799cp3e089y3phfwmt67xyv6qfn3i9uerg8ewvraqrb1g93om8uuzhayd4o1yjumnypvp8927zmcxw6inkgh402ai9dpw9po2z5yqnsv9oby5rzpmyhsxj37mtjyben65068b9hznbxygxz5pxvvsfg86q23yypi4dg0zmfvvylvhdd9inq0yde6v1tbajwmc9npwjpvioy5mskn55ajflgbfka0frg5ajgkf83twsh3tzwfs4tw6e936siwof979ybzruj9eho1u5d4fy88lp5fxkaik7e9eoscy4x6nxtg0h1mttsx0lhq0p2nn3u5gtmyrlc4maus13u5ixwwxrewivjtx8itgtwn7uftl43hw7ygw3tr51nzhj2qhrr748psz1u1cjcuj05ehnx0s7gc2wtkei89jcvh252fjh4eaqi7op35ntj5xu8h0r9fr96hvsszgbftkb1hnqrzginltxunvc4liwy6o8ntdczybvwl3o32my9fguv82nbmilebadlil6r7jbxn0pg7dnej7jqiumog1a46s649rqselsqub8wzavqmqpf3scm2es80qjo4dmabcr9t0t3ivuoekldjc3gdj7kr3kp1t4nhfacoygbn8ds4cw8fjplb88rcy0gr5yodhrktjaoe13l0dgu46h2pocvz1dpepa4u6ovsfh3e2htqwx71ze8io7cm9nv1s10fplunk1zic1gfcbxe1rlkwshokb7acpgreqsfb9cyx6bfmij1rfd25seltgz9dh0t8ph3nd0291zv432xbi6pvrb57zpey4mzl9tjxkaj3b0cr98k5u45i43habf6baa4qd5pg45aiaqpifyauaxxjksj2bpwbi0e9twqouc64c6mfp24vhc0yx4qtxrcfn0hxw3bxwvc37al1utb8nunaorylwf40kwueuzyqs24ybbij5wutfm5p1rc81age6lw7aziz75ufatjh0hcdwuj7eh1swd28uulw3ype11hca7e2et8gu80vwdfe3zjaqqdc6vzumu30pvhpydiwjtm1b1efczr73nx1jy7jxfgmz9n6a117s0d70a7vt0hhplhklbsgt7klpo7h5ynpa1lxx03mi72oo0jh4hjma1ga6vaicwpj74en1euwxutqrbhuyxzs6zlufudltq8sgm3gon8gmsma7fxzkrrjw990mhqp64dn8inohz3qdl4eaxn5ql3cf2kjmvs4wid7nod1bvhguk4wpl7v3jfaet649m9k5x3hkfhpdb5unbsboqhxurv9s1ho5nn2olheq337q0q86crukmqobxyi9zdhy3zp5anwamab0o3e4e1i8o6580agzwx4t2jc0nnkjqv1j7yeqoxheczfg03tflglydiduvan1voxw8lxjo0v10dyw34izayhmw81v2cbqarot68nx8v3i6ir0qjxspofxrbnignk0hblci3lnaag2ufi2e2nt0vtgxrs2i48g6nvgchot58no5d6mjssg7cn3nvx6i5f1nmqj9d40yqdlat1tq3pxjjr5tizlh7fm8tfvw38x0kdju3toinnivmb46ox7wp6nfvqkhitpzvz9itdetmu7fovkp47l4om23lttv2c7wr9dtjel0hy72ve56ypyyuifxz8nk84ahenq6ohgvgyyu1auwz47yoioivfg3kempwgpcr51ukf1s8jptrm1p19du5x77q4pfecukybwdjb5pra4c6fjjzk16owrr7ojjvvp5545ab9wwj3rwi0y1edn38idja24x77shmqpm48ky77aggw50rvjz083gt45i2086em76gpcjuuyzzkpcg4khoi2h0wnzaxel0vr6n8wbfvwla41mmc4ybj4ikouarmsdw0g736vno941rrjrlzyoj1o366k8n8332nrjzxfn1kbheit161g3hjfjhjb8ki49nh6gu92b17e1wxir1qrfo4jz9z3tkncjfj2pdlaagrzf87mbic928yfweifoaxa00i19t1yzgz0khz2kfvf289kbxhhtabeivi73eyeiuj1hqkj7avc8z3wogijs6eiu0f4kidfxohe4t3otskldyrx8h634snh93wjy06cza5uxfw6tqiinx8lvttno64if0bx90n1a0jyq
3844nzqaukpbfi5bvvhhheixpvddigmwkuaom081hujhji7gde3pjmyrgspet0gcnqjyfg2jngl695qtqn24zbadvvmod6m5ywt0xde4p88uj6ir3ahfn5xxu12khgxkyavi1z3zw2r63i3g3f2tjxnx8f6i2l8wukjggpg9q73k9oucknzbl53vv144izeld18e6as9ewzac4npww3nuhibskowy3rbqoyl7udtjknqp3l2gr28g2w544mrspk4qkqup8qudavr3efmbk3w9qvgqbrrcxzl1966xzigltxxdzp2qc1v5eq4xvejoyyneiiccp2wzl76akqbso6ofu1jzikicx6ym9bfvamuxk4y6zyhfcp138djtff6afryewurh2gphurq28fg1ox6ko79dq9lw27soxsk20r5aw082htxf4aahkwgqenp9hh4oe3kg3gt6upt4nd9srr0ke2iskzn3k6enbcsh1b3v3cw2d7uex6l2yvreljeb1f7v74hstlyo8i8hungt29me1ik8mzu0z5i3p 00:06:06.408 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:06.408 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:06.408 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:06.408 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:06.408 { 00:06:06.408 "subsystems": [ 00:06:06.408 { 00:06:06.408 "subsystem": "bdev", 00:06:06.408 "config": [ 00:06:06.408 { 00:06:06.408 "params": { 00:06:06.408 "trtype": "pcie", 00:06:06.408 "traddr": "0000:00:10.0", 00:06:06.408 "name": "Nvme0" 00:06:06.408 }, 00:06:06.408 "method": "bdev_nvme_attach_controller" 00:06:06.408 }, 00:06:06.408 { 00:06:06.408 "method": "bdev_wait_for_examine" 00:06:06.408 } 00:06:06.408 ] 00:06:06.408 } 00:06:06.408 ] 00:06:06.408 } 00:06:06.408 [2024-11-04 09:55:38.377494] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:06.408 [2024-11-04 09:55:38.377623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59959 ] 00:06:06.408 [2024-11-04 09:55:38.526313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.666 [2024-11-04 09:55:38.588503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.666 [2024-11-04 09:55:38.643550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.666  [2024-11-04T09:55:39.093Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:06.923 00:06:06.923 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:06.923 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:06.923 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:06.923 09:55:38 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:06.923 { 00:06:06.923 "subsystems": [ 00:06:06.923 { 00:06:06.923 "subsystem": "bdev", 00:06:06.923 "config": [ 00:06:06.923 { 00:06:06.923 "params": { 00:06:06.923 "trtype": "pcie", 00:06:06.923 "traddr": "0000:00:10.0", 00:06:06.923 "name": "Nvme0" 00:06:06.923 }, 00:06:06.923 "method": "bdev_nvme_attach_controller" 00:06:06.923 }, 00:06:06.923 { 00:06:06.923 "method": "bdev_wait_for_examine" 00:06:06.923 } 00:06:06.923 ] 00:06:06.923 } 00:06:06.923 ] 00:06:06.923 } 00:06:06.923 [2024-11-04 09:55:39.000444] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
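dd_rw_offset, which starts above, checks that seek and skip address the same block: gen_bytes produces a single 4096-byte block of random data (the long string captured in the trace), spdk_dd writes it one block into the bdev with --seek=1, reads the same block back with --skip=1 --count=1, and the shell then compares the read-back bytes against the generated data (the comparison follows below). A condensed sketch, with /dev/urandom standing in for gen_bytes, a config file for /dev/fd/62, and cmp for the in-shell string comparison:

#!/usr/bin/env bash
# dd_rw_offset in miniature: write one 4 KiB block at block offset 1, read it
# back from the same offset, and compare the bytes.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
DD=$SPDK/build/bin/spdk_dd
CONF=${1:?path to the bdev JSON config}   # gen_conf output in the real test

head -c 4096 /dev/urandom > dd.dump0                        # gen_bytes 4096 substitute
"$DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json "$CONF"    # write at block offset 1
"$DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json "$CONF"
cmp dd.dump0 dd.dump1 && echo "offset read-back matches"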
00:06:06.923 [2024-11-04 09:55:39.000560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59969 ] 00:06:07.180 [2024-11-04 09:55:39.143891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.180 [2024-11-04 09:55:39.205944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.180 [2024-11-04 09:55:39.261630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.438  [2024-11-04T09:55:39.608Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:07.438 00:06:07.438 09:55:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ rdl8s5ic2c6jvdzxmyoqhtd04we3fr5275n7wlgorvcpu8sgbu8zh7ixpj1e0fszmqbj1y1p4ebk35rz2drmr85izrrzrecbkx8ekzyinary3kgebudht31dvpud4jfi8q3kmkkimxo5pjdknpcv2apfb3p9ytvg27nxcalcbdow0ppdl5zundviptxbkn6712ni2n1wihykrmpsw029y6tz58bjmlnly5yqdnei3mlidtdcp11f4t5q53frq26afvzcva2alsqm90zsb304tfeje3hnajnks8p23pm9a6p8rhvpiyeow2wc1dn1vthitbzg40k167pauph0xhlxem2940tvx00zcrfoa6v2v94d59upqk2dpkzlgb5giwi8eag7hb2tdmt3iumzf835u8p99zcexwfmtlypbd5mr9wa0ecsrcv28mbasaptkdzx98tads67c1e8dhqm5w09fe8r53fqpucnv1xuorb624pf7hxn25yk6xzphdbsf8d6lgtxjpis9sdj25raewb2xmyrncemloolfat515cham644y7jqyx2ee9o09fzq2n81xk9uezlx5e5qa5vs5xt7afeyki1e8qxlm5uz6jr3wq1zgk5fjhuziw9bhkgdyi85aaqyd8jzsha1zk2igwhwike3w1niithbt93q0htu210zrv32z6cjlsr99cwjb53syw3eg1tzeob3hjhv5cbysnmwun75hdth4zm67rnzva5h0ruex0zmkz3m6tkanliiw02pu2wyx9b1q0aqcq16hlsb72vk39vcfawr0oeev4eytchimzugyxgcrmidb1ct6ukgm3gk6qt3329njr2yntz44uarwo55xql97hlxouupxqvckrghncudvsamjnfi0frcthyq6xmrp8oi6swibswfrlyj5qxiuz2lv1lea826c29kcth1du9fcgtwivmvj9nkplfjpj1855pcsxmfzmecu4bxsyxybm5t88535vte1mlxtmk74uikmqxqsifmd9j9gj0efe2bf2fdsy8yimy483vho6fw8hkqzfpdrjho1mgq6qtaavkw9vgpf7vfkn5ea9g2y2borqpk8llugmqb0yxa0r5m93m4b582yn7so54syfnt188quxxljveco587rle2vff5ofm4c7htf7mtp8zxmj35785sffg6k05vhj0buzmxzju1q6efp8r60cc0rj5aze781pr3v8aaefa4m4liaqe799xk3iq24y3d3c5cqtmdbtwps5v4prbwx2o8v70tc715dvnzikmnpr3e1ibr6qyv2rauljdkkhz7o2fc94o9h8rhdsqyfet40q6k9dzn5qxb9uz2zdq2s5zqtpgs0o9tsxu31q799cp3e089y3phfwmt67xyv6qfn3i9uerg8ewvraqrb1g93om8uuzhayd4o1yjumnypvp8927zmcxw6inkgh402ai9dpw9po2z5yqnsv9oby5rzpmyhsxj37mtjyben65068b9hznbxygxz5pxvvsfg86q23yypi4dg0zmfvvylvhdd9inq0yde6v1tbajwmc9npwjpvioy5mskn55ajflgbfka0frg5ajgkf83twsh3tzwfs4tw6e936siwof979ybzruj9eho1u5d4fy88lp5fxkaik7e9eoscy4x6nxtg0h1mttsx0lhq0p2nn3u5gtmyrlc4maus13u5ixwwxrewivjtx8itgtwn7uftl43hw7ygw3tr51nzhj2qhrr748psz1u1cjcuj05ehnx0s7gc2wtkei89jcvh252fjh4eaqi7op35ntj5xu8h0r9fr96hvsszgbftkb1hnqrzginltxunvc4liwy6o8ntdczybvwl3o32my9fguv82nbmilebadlil6r7jbxn0pg7dnej7jqiumog1a46s649rqselsqub8wzavqmqpf3scm2es80qjo4dmabcr9t0t3ivuoekldjc3gdj7kr3kp1t4nhfacoygbn8ds4cw8fjplb88rcy0gr5yodhrktjaoe13l0dgu46h2pocvz1dpepa4u6ovsfh3e2htqwx71ze8io7cm9nv1s10fplunk1zic1gfcbxe1rlkwshokb7acpgreqsfb9cyx6bfmij1rfd25seltgz9dh0t8ph3nd0291zv432xbi6pvrb57zpey4mzl9tjxkaj3b0cr98k5u45i43habf6baa4qd5pg45aiaqpifyauaxxjksj2bpwbi0e9twqouc64c6mfp24vhc0yx4qtxrcfn0hxw3bxwvc37al1utb8nunaorylwf40kwueuzyqs24ybbij5wutfm5p1rc81age6lw7aziz75ufatjh0hcdwuj7eh1swd28uulw3ype11hca7e2et8gu80vwdfe3zjaqqdc6vzumu30pvhpydiwjtm1b1efczr73nx1jy7jxfgmz9n6a117s0d70a7vt0hhplhklbsgt7klpo7h5ynpa1lxx03mi72oo0jh4hjma1ga6vaicwpj74en1euwxutqrbhuyxzs6zlufudltq8sgm3gon8gmsma7fxzkrrjw990mhqp64dn8inohz3qdl4e
axn5ql3cf2kjmvs4wid7nod1bvhguk4wpl7v3jfaet649m9k5x3hkfhpdb5unbsboqhxurv9s1ho5nn2olheq337q0q86crukmqobxyi9zdhy3zp5anwamab0o3e4e1i8o6580agzwx4t2jc0nnkjqv1j7yeqoxheczfg03tflglydiduvan1voxw8lxjo0v10dyw34izayhmw81v2cbqarot68nx8v3i6ir0qjxspofxrbnignk0hblci3lnaag2ufi2e2nt0vtgxrs2i48g6nvgchot58no5d6mjssg7cn3nvx6i5f1nmqj9d40yqdlat1tq3pxjjr5tizlh7fm8tfvw38x0kdju3toinnivmb46ox7wp6nfvqkhitpzvz9itdetmu7fovkp47l4om23lttv2c7wr9dtjel0hy72ve56ypyyuifxz8nk84ahenq6ohgvgyyu1auwz47yoioivfg3kempwgpcr51ukf1s8jptrm1p19du5x77q4pfecukybwdjb5pra4c6fjjzk16owrr7ojjvvp5545ab9wwj3rwi0y1edn38idja24x77shmqpm48ky77aggw50rvjz083gt45i2086em76gpcjuuyzzkpcg4khoi2h0wnzaxel0vr6n8wbfvwla41mmc4ybj4ikouarmsdw0g736vno941rrjrlzyoj1o366k8n8332nrjzxfn1kbheit161g3hjfjhjb8ki49nh6gu92b17e1wxir1qrfo4jz9z3tkncjfj2pdlaagrzf87mbic928yfweifoaxa00i19t1yzgz0khz2kfvf289kbxhhtabeivi73eyeiuj1hqkj7avc8z3wogijs6eiu0f4kidfxohe4t3otskldyrx8h634snh93wjy06cza5uxfw6tqiinx8lvttno64if0bx90n1a0jyq3844nzqaukpbfi5bvvhhheixpvddigmwkuaom081hujhji7gde3pjmyrgspet0gcnqjyfg2jngl695qtqn24zbadvvmod6m5ywt0xde4p88uj6ir3ahfn5xxu12khgxkyavi1z3zw2r63i3g3f2tjxnx8f6i2l8wukjggpg9q73k9oucknzbl53vv144izeld18e6as9ewzac4npww3nuhibskowy3rbqoyl7udtjknqp3l2gr28g2w544mrspk4qkqup8qudavr3efmbk3w9qvgqbrrcxzl1966xzigltxxdzp2qc1v5eq4xvejoyyneiiccp2wzl76akqbso6ofu1jzikicx6ym9bfvamuxk4y6zyhfcp138djtff6afryewurh2gphurq28fg1ox6ko79dq9lw27soxsk20r5aw082htxf4aahkwgqenp9hh4oe3kg3gt6upt4nd9srr0ke2iskzn3k6enbcsh1b3v3cw2d7uex6l2yvreljeb1f7v74hstlyo8i8hungt29me1ik8mzu0z5i3p == \r\d\l\8\s\5\i\c\2\c\6\j\v\d\z\x\m\y\o\q\h\t\d\0\4\w\e\3\f\r\5\2\7\5\n\7\w\l\g\o\r\v\c\p\u\8\s\g\b\u\8\z\h\7\i\x\p\j\1\e\0\f\s\z\m\q\b\j\1\y\1\p\4\e\b\k\3\5\r\z\2\d\r\m\r\8\5\i\z\r\r\z\r\e\c\b\k\x\8\e\k\z\y\i\n\a\r\y\3\k\g\e\b\u\d\h\t\3\1\d\v\p\u\d\4\j\f\i\8\q\3\k\m\k\k\i\m\x\o\5\p\j\d\k\n\p\c\v\2\a\p\f\b\3\p\9\y\t\v\g\2\7\n\x\c\a\l\c\b\d\o\w\0\p\p\d\l\5\z\u\n\d\v\i\p\t\x\b\k\n\6\7\1\2\n\i\2\n\1\w\i\h\y\k\r\m\p\s\w\0\2\9\y\6\t\z\5\8\b\j\m\l\n\l\y\5\y\q\d\n\e\i\3\m\l\i\d\t\d\c\p\1\1\f\4\t\5\q\5\3\f\r\q\2\6\a\f\v\z\c\v\a\2\a\l\s\q\m\9\0\z\s\b\3\0\4\t\f\e\j\e\3\h\n\a\j\n\k\s\8\p\2\3\p\m\9\a\6\p\8\r\h\v\p\i\y\e\o\w\2\w\c\1\d\n\1\v\t\h\i\t\b\z\g\4\0\k\1\6\7\p\a\u\p\h\0\x\h\l\x\e\m\2\9\4\0\t\v\x\0\0\z\c\r\f\o\a\6\v\2\v\9\4\d\5\9\u\p\q\k\2\d\p\k\z\l\g\b\5\g\i\w\i\8\e\a\g\7\h\b\2\t\d\m\t\3\i\u\m\z\f\8\3\5\u\8\p\9\9\z\c\e\x\w\f\m\t\l\y\p\b\d\5\m\r\9\w\a\0\e\c\s\r\c\v\2\8\m\b\a\s\a\p\t\k\d\z\x\9\8\t\a\d\s\6\7\c\1\e\8\d\h\q\m\5\w\0\9\f\e\8\r\5\3\f\q\p\u\c\n\v\1\x\u\o\r\b\6\2\4\p\f\7\h\x\n\2\5\y\k\6\x\z\p\h\d\b\s\f\8\d\6\l\g\t\x\j\p\i\s\9\s\d\j\2\5\r\a\e\w\b\2\x\m\y\r\n\c\e\m\l\o\o\l\f\a\t\5\1\5\c\h\a\m\6\4\4\y\7\j\q\y\x\2\e\e\9\o\0\9\f\z\q\2\n\8\1\x\k\9\u\e\z\l\x\5\e\5\q\a\5\v\s\5\x\t\7\a\f\e\y\k\i\1\e\8\q\x\l\m\5\u\z\6\j\r\3\w\q\1\z\g\k\5\f\j\h\u\z\i\w\9\b\h\k\g\d\y\i\8\5\a\a\q\y\d\8\j\z\s\h\a\1\z\k\2\i\g\w\h\w\i\k\e\3\w\1\n\i\i\t\h\b\t\9\3\q\0\h\t\u\2\1\0\z\r\v\3\2\z\6\c\j\l\s\r\9\9\c\w\j\b\5\3\s\y\w\3\e\g\1\t\z\e\o\b\3\h\j\h\v\5\c\b\y\s\n\m\w\u\n\7\5\h\d\t\h\4\z\m\6\7\r\n\z\v\a\5\h\0\r\u\e\x\0\z\m\k\z\3\m\6\t\k\a\n\l\i\i\w\0\2\p\u\2\w\y\x\9\b\1\q\0\a\q\c\q\1\6\h\l\s\b\7\2\v\k\3\9\v\c\f\a\w\r\0\o\e\e\v\4\e\y\t\c\h\i\m\z\u\g\y\x\g\c\r\m\i\d\b\1\c\t\6\u\k\g\m\3\g\k\6\q\t\3\3\2\9\n\j\r\2\y\n\t\z\4\4\u\a\r\w\o\5\5\x\q\l\9\7\h\l\x\o\u\u\p\x\q\v\c\k\r\g\h\n\c\u\d\v\s\a\m\j\n\f\i\0\f\r\c\t\h\y\q\6\x\m\r\p\8\o\i\6\s\w\i\b\s\w\f\r\l\y\j\5\q\x\i\u\z\2\l\v\1\l\e\a\8\2\6\c\2\9\k\c\t\h\1\d\u\9\f\c\g\t\w\i\v\m\v\j\9\n\k\p\l\f\j\p\j\1\8\5\5\p\c\s\x\m\f\z\m\e\c\u\4\b\x\s\y\x\y\b\m\5\t\8\8\5\3\5\v\t\e\1\m\l\x\t\m\k\7\4\u\i\k\m\q\x\q\s\i\f\m\d\9\j\9\g\j\
0\e\f\e\2\b\f\2\f\d\s\y\8\y\i\m\y\4\8\3\v\h\o\6\f\w\8\h\k\q\z\f\p\d\r\j\h\o\1\m\g\q\6\q\t\a\a\v\k\w\9\v\g\p\f\7\v\f\k\n\5\e\a\9\g\2\y\2\b\o\r\q\p\k\8\l\l\u\g\m\q\b\0\y\x\a\0\r\5\m\9\3\m\4\b\5\8\2\y\n\7\s\o\5\4\s\y\f\n\t\1\8\8\q\u\x\x\l\j\v\e\c\o\5\8\7\r\l\e\2\v\f\f\5\o\f\m\4\c\7\h\t\f\7\m\t\p\8\z\x\m\j\3\5\7\8\5\s\f\f\g\6\k\0\5\v\h\j\0\b\u\z\m\x\z\j\u\1\q\6\e\f\p\8\r\6\0\c\c\0\r\j\5\a\z\e\7\8\1\p\r\3\v\8\a\a\e\f\a\4\m\4\l\i\a\q\e\7\9\9\x\k\3\i\q\2\4\y\3\d\3\c\5\c\q\t\m\d\b\t\w\p\s\5\v\4\p\r\b\w\x\2\o\8\v\7\0\t\c\7\1\5\d\v\n\z\i\k\m\n\p\r\3\e\1\i\b\r\6\q\y\v\2\r\a\u\l\j\d\k\k\h\z\7\o\2\f\c\9\4\o\9\h\8\r\h\d\s\q\y\f\e\t\4\0\q\6\k\9\d\z\n\5\q\x\b\9\u\z\2\z\d\q\2\s\5\z\q\t\p\g\s\0\o\9\t\s\x\u\3\1\q\7\9\9\c\p\3\e\0\8\9\y\3\p\h\f\w\m\t\6\7\x\y\v\6\q\f\n\3\i\9\u\e\r\g\8\e\w\v\r\a\q\r\b\1\g\9\3\o\m\8\u\u\z\h\a\y\d\4\o\1\y\j\u\m\n\y\p\v\p\8\9\2\7\z\m\c\x\w\6\i\n\k\g\h\4\0\2\a\i\9\d\p\w\9\p\o\2\z\5\y\q\n\s\v\9\o\b\y\5\r\z\p\m\y\h\s\x\j\3\7\m\t\j\y\b\e\n\6\5\0\6\8\b\9\h\z\n\b\x\y\g\x\z\5\p\x\v\v\s\f\g\8\6\q\2\3\y\y\p\i\4\d\g\0\z\m\f\v\v\y\l\v\h\d\d\9\i\n\q\0\y\d\e\6\v\1\t\b\a\j\w\m\c\9\n\p\w\j\p\v\i\o\y\5\m\s\k\n\5\5\a\j\f\l\g\b\f\k\a\0\f\r\g\5\a\j\g\k\f\8\3\t\w\s\h\3\t\z\w\f\s\4\t\w\6\e\9\3\6\s\i\w\o\f\9\7\9\y\b\z\r\u\j\9\e\h\o\1\u\5\d\4\f\y\8\8\l\p\5\f\x\k\a\i\k\7\e\9\e\o\s\c\y\4\x\6\n\x\t\g\0\h\1\m\t\t\s\x\0\l\h\q\0\p\2\n\n\3\u\5\g\t\m\y\r\l\c\4\m\a\u\s\1\3\u\5\i\x\w\w\x\r\e\w\i\v\j\t\x\8\i\t\g\t\w\n\7\u\f\t\l\4\3\h\w\7\y\g\w\3\t\r\5\1\n\z\h\j\2\q\h\r\r\7\4\8\p\s\z\1\u\1\c\j\c\u\j\0\5\e\h\n\x\0\s\7\g\c\2\w\t\k\e\i\8\9\j\c\v\h\2\5\2\f\j\h\4\e\a\q\i\7\o\p\3\5\n\t\j\5\x\u\8\h\0\r\9\f\r\9\6\h\v\s\s\z\g\b\f\t\k\b\1\h\n\q\r\z\g\i\n\l\t\x\u\n\v\c\4\l\i\w\y\6\o\8\n\t\d\c\z\y\b\v\w\l\3\o\3\2\m\y\9\f\g\u\v\8\2\n\b\m\i\l\e\b\a\d\l\i\l\6\r\7\j\b\x\n\0\p\g\7\d\n\e\j\7\j\q\i\u\m\o\g\1\a\4\6\s\6\4\9\r\q\s\e\l\s\q\u\b\8\w\z\a\v\q\m\q\p\f\3\s\c\m\2\e\s\8\0\q\j\o\4\d\m\a\b\c\r\9\t\0\t\3\i\v\u\o\e\k\l\d\j\c\3\g\d\j\7\k\r\3\k\p\1\t\4\n\h\f\a\c\o\y\g\b\n\8\d\s\4\c\w\8\f\j\p\l\b\8\8\r\c\y\0\g\r\5\y\o\d\h\r\k\t\j\a\o\e\1\3\l\0\d\g\u\4\6\h\2\p\o\c\v\z\1\d\p\e\p\a\4\u\6\o\v\s\f\h\3\e\2\h\t\q\w\x\7\1\z\e\8\i\o\7\c\m\9\n\v\1\s\1\0\f\p\l\u\n\k\1\z\i\c\1\g\f\c\b\x\e\1\r\l\k\w\s\h\o\k\b\7\a\c\p\g\r\e\q\s\f\b\9\c\y\x\6\b\f\m\i\j\1\r\f\d\2\5\s\e\l\t\g\z\9\d\h\0\t\8\p\h\3\n\d\0\2\9\1\z\v\4\3\2\x\b\i\6\p\v\r\b\5\7\z\p\e\y\4\m\z\l\9\t\j\x\k\a\j\3\b\0\c\r\9\8\k\5\u\4\5\i\4\3\h\a\b\f\6\b\a\a\4\q\d\5\p\g\4\5\a\i\a\q\p\i\f\y\a\u\a\x\x\j\k\s\j\2\b\p\w\b\i\0\e\9\t\w\q\o\u\c\6\4\c\6\m\f\p\2\4\v\h\c\0\y\x\4\q\t\x\r\c\f\n\0\h\x\w\3\b\x\w\v\c\3\7\a\l\1\u\t\b\8\n\u\n\a\o\r\y\l\w\f\4\0\k\w\u\e\u\z\y\q\s\2\4\y\b\b\i\j\5\w\u\t\f\m\5\p\1\r\c\8\1\a\g\e\6\l\w\7\a\z\i\z\7\5\u\f\a\t\j\h\0\h\c\d\w\u\j\7\e\h\1\s\w\d\2\8\u\u\l\w\3\y\p\e\1\1\h\c\a\7\e\2\e\t\8\g\u\8\0\v\w\d\f\e\3\z\j\a\q\q\d\c\6\v\z\u\m\u\3\0\p\v\h\p\y\d\i\w\j\t\m\1\b\1\e\f\c\z\r\7\3\n\x\1\j\y\7\j\x\f\g\m\z\9\n\6\a\1\1\7\s\0\d\7\0\a\7\v\t\0\h\h\p\l\h\k\l\b\s\g\t\7\k\l\p\o\7\h\5\y\n\p\a\1\l\x\x\0\3\m\i\7\2\o\o\0\j\h\4\h\j\m\a\1\g\a\6\v\a\i\c\w\p\j\7\4\e\n\1\e\u\w\x\u\t\q\r\b\h\u\y\x\z\s\6\z\l\u\f\u\d\l\t\q\8\s\g\m\3\g\o\n\8\g\m\s\m\a\7\f\x\z\k\r\r\j\w\9\9\0\m\h\q\p\6\4\d\n\8\i\n\o\h\z\3\q\d\l\4\e\a\x\n\5\q\l\3\c\f\2\k\j\m\v\s\4\w\i\d\7\n\o\d\1\b\v\h\g\u\k\4\w\p\l\7\v\3\j\f\a\e\t\6\4\9\m\9\k\5\x\3\h\k\f\h\p\d\b\5\u\n\b\s\b\o\q\h\x\u\r\v\9\s\1\h\o\5\n\n\2\o\l\h\e\q\3\3\7\q\0\q\8\6\c\r\u\k\m\q\o\b\x\y\i\9\z\d\h\y\3\z\p\5\a\n\w\a\m\a\b\0\o\3\e\4\e\1\i\8\o\6\5\8\0\a\g\z\w\x\4\t\2\j\c\0\n\n\k\j\q\v\1\j\7\y\e\q\o\x\h\e\c\z\f\g\0\3\t\f\l\g\l\y\d\i\d\u\v\a\n\1\v\o\x\w\8\l\x\j\o\0\v\1\0\d\y\w\3\4\i\z
\a\y\h\m\w\8\1\v\2\c\b\q\a\r\o\t\6\8\n\x\8\v\3\i\6\i\r\0\q\j\x\s\p\o\f\x\r\b\n\i\g\n\k\0\h\b\l\c\i\3\l\n\a\a\g\2\u\f\i\2\e\2\n\t\0\v\t\g\x\r\s\2\i\4\8\g\6\n\v\g\c\h\o\t\5\8\n\o\5\d\6\m\j\s\s\g\7\c\n\3\n\v\x\6\i\5\f\1\n\m\q\j\9\d\4\0\y\q\d\l\a\t\1\t\q\3\p\x\j\j\r\5\t\i\z\l\h\7\f\m\8\t\f\v\w\3\8\x\0\k\d\j\u\3\t\o\i\n\n\i\v\m\b\4\6\o\x\7\w\p\6\n\f\v\q\k\h\i\t\p\z\v\z\9\i\t\d\e\t\m\u\7\f\o\v\k\p\4\7\l\4\o\m\2\3\l\t\t\v\2\c\7\w\r\9\d\t\j\e\l\0\h\y\7\2\v\e\5\6\y\p\y\y\u\i\f\x\z\8\n\k\8\4\a\h\e\n\q\6\o\h\g\v\g\y\y\u\1\a\u\w\z\4\7\y\o\i\o\i\v\f\g\3\k\e\m\p\w\g\p\c\r\5\1\u\k\f\1\s\8\j\p\t\r\m\1\p\1\9\d\u\5\x\7\7\q\4\p\f\e\c\u\k\y\b\w\d\j\b\5\p\r\a\4\c\6\f\j\j\z\k\1\6\o\w\r\r\7\o\j\j\v\v\p\5\5\4\5\a\b\9\w\w\j\3\r\w\i\0\y\1\e\d\n\3\8\i\d\j\a\2\4\x\7\7\s\h\m\q\p\m\4\8\k\y\7\7\a\g\g\w\5\0\r\v\j\z\0\8\3\g\t\4\5\i\2\0\8\6\e\m\7\6\g\p\c\j\u\u\y\z\z\k\p\c\g\4\k\h\o\i\2\h\0\w\n\z\a\x\e\l\0\v\r\6\n\8\w\b\f\v\w\l\a\4\1\m\m\c\4\y\b\j\4\i\k\o\u\a\r\m\s\d\w\0\g\7\3\6\v\n\o\9\4\1\r\r\j\r\l\z\y\o\j\1\o\3\6\6\k\8\n\8\3\3\2\n\r\j\z\x\f\n\1\k\b\h\e\i\t\1\6\1\g\3\h\j\f\j\h\j\b\8\k\i\4\9\n\h\6\g\u\9\2\b\1\7\e\1\w\x\i\r\1\q\r\f\o\4\j\z\9\z\3\t\k\n\c\j\f\j\2\p\d\l\a\a\g\r\z\f\8\7\m\b\i\c\9\2\8\y\f\w\e\i\f\o\a\x\a\0\0\i\1\9\t\1\y\z\g\z\0\k\h\z\2\k\f\v\f\2\8\9\k\b\x\h\h\t\a\b\e\i\v\i\7\3\e\y\e\i\u\j\1\h\q\k\j\7\a\v\c\8\z\3\w\o\g\i\j\s\6\e\i\u\0\f\4\k\i\d\f\x\o\h\e\4\t\3\o\t\s\k\l\d\y\r\x\8\h\6\3\4\s\n\h\9\3\w\j\y\0\6\c\z\a\5\u\x\f\w\6\t\q\i\i\n\x\8\l\v\t\t\n\o\6\4\i\f\0\b\x\9\0\n\1\a\0\j\y\q\3\8\4\4\n\z\q\a\u\k\p\b\f\i\5\b\v\v\h\h\h\e\i\x\p\v\d\d\i\g\m\w\k\u\a\o\m\0\8\1\h\u\j\h\j\i\7\g\d\e\3\p\j\m\y\r\g\s\p\e\t\0\g\c\n\q\j\y\f\g\2\j\n\g\l\6\9\5\q\t\q\n\2\4\z\b\a\d\v\v\m\o\d\6\m\5\y\w\t\0\x\d\e\4\p\8\8\u\j\6\i\r\3\a\h\f\n\5\x\x\u\1\2\k\h\g\x\k\y\a\v\i\1\z\3\z\w\2\r\6\3\i\3\g\3\f\2\t\j\x\n\x\8\f\6\i\2\l\8\w\u\k\j\g\g\p\g\9\q\7\3\k\9\o\u\c\k\n\z\b\l\5\3\v\v\1\4\4\i\z\e\l\d\1\8\e\6\a\s\9\e\w\z\a\c\4\n\p\w\w\3\n\u\h\i\b\s\k\o\w\y\3\r\b\q\o\y\l\7\u\d\t\j\k\n\q\p\3\l\2\g\r\2\8\g\2\w\5\4\4\m\r\s\p\k\4\q\k\q\u\p\8\q\u\d\a\v\r\3\e\f\m\b\k\3\w\9\q\v\g\q\b\r\r\c\x\z\l\1\9\6\6\x\z\i\g\l\t\x\x\d\z\p\2\q\c\1\v\5\e\q\4\x\v\e\j\o\y\y\n\e\i\i\c\c\p\2\w\z\l\7\6\a\k\q\b\s\o\6\o\f\u\1\j\z\i\k\i\c\x\6\y\m\9\b\f\v\a\m\u\x\k\4\y\6\z\y\h\f\c\p\1\3\8\d\j\t\f\f\6\a\f\r\y\e\w\u\r\h\2\g\p\h\u\r\q\2\8\f\g\1\o\x\6\k\o\7\9\d\q\9\l\w\2\7\s\o\x\s\k\2\0\r\5\a\w\0\8\2\h\t\x\f\4\a\a\h\k\w\g\q\e\n\p\9\h\h\4\o\e\3\k\g\3\g\t\6\u\p\t\4\n\d\9\s\r\r\0\k\e\2\i\s\k\z\n\3\k\6\e\n\b\c\s\h\1\b\3\v\3\c\w\2\d\7\u\e\x\6\l\2\y\v\r\e\l\j\e\b\1\f\7\v\7\4\h\s\t\l\y\o\8\i\8\h\u\n\g\t\2\9\m\e\1\i\k\8\m\z\u\0\z\5\i\3\p ]] 00:06:07.439 00:06:07.439 real 0m1.288s 00:06:07.439 user 0m0.869s 00:06:07.439 sys 0m0.602s 00:06:07.439 ************************************ 00:06:07.439 END TEST dd_rw_offset 00:06:07.439 ************************************ 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:07.439 09:55:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:07.697 [2024-11-04 09:55:39.651190] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:07.697 [2024-11-04 09:55:39.651289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60004 ] 00:06:07.697 { 00:06:07.697 "subsystems": [ 00:06:07.697 { 00:06:07.697 "subsystem": "bdev", 00:06:07.697 "config": [ 00:06:07.697 { 00:06:07.697 "params": { 00:06:07.697 "trtype": "pcie", 00:06:07.697 "traddr": "0000:00:10.0", 00:06:07.697 "name": "Nvme0" 00:06:07.697 }, 00:06:07.697 "method": "bdev_nvme_attach_controller" 00:06:07.697 }, 00:06:07.697 { 00:06:07.697 "method": "bdev_wait_for_examine" 00:06:07.697 } 00:06:07.697 ] 00:06:07.697 } 00:06:07.697 ] 00:06:07.697 } 00:06:07.697 [2024-11-04 09:55:39.795903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.697 [2024-11-04 09:55:39.853807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.954 [2024-11-04 09:55:39.907024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.954  [2024-11-04T09:55:40.381Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:08.211 00:06:08.211 09:55:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.211 00:06:08.211 real 0m17.645s 00:06:08.211 user 0m12.679s 00:06:08.211 sys 0m6.633s 00:06:08.211 ************************************ 00:06:08.211 END TEST spdk_dd_basic_rw 00:06:08.211 ************************************ 00:06:08.211 09:55:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.211 09:55:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:08.211 09:55:40 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:08.211 09:55:40 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.211 09:55:40 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.211 09:55:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:08.211 ************************************ 00:06:08.211 START TEST spdk_dd_posix 00:06:08.211 ************************************ 00:06:08.211 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:08.211 * Looking for test storage... 
00:06:08.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:08.211 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.211 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.211 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.471 --rc genhtml_branch_coverage=1 00:06:08.471 --rc genhtml_function_coverage=1 00:06:08.471 --rc genhtml_legend=1 00:06:08.471 --rc geninfo_all_blocks=1 00:06:08.471 --rc geninfo_unexecuted_blocks=1 00:06:08.471 00:06:08.471 ' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.471 --rc genhtml_branch_coverage=1 00:06:08.471 --rc genhtml_function_coverage=1 00:06:08.471 --rc genhtml_legend=1 00:06:08.471 --rc geninfo_all_blocks=1 00:06:08.471 --rc geninfo_unexecuted_blocks=1 00:06:08.471 00:06:08.471 ' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.471 --rc genhtml_branch_coverage=1 00:06:08.471 --rc genhtml_function_coverage=1 00:06:08.471 --rc genhtml_legend=1 00:06:08.471 --rc geninfo_all_blocks=1 00:06:08.471 --rc geninfo_unexecuted_blocks=1 00:06:08.471 00:06:08.471 ' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.471 --rc genhtml_branch_coverage=1 00:06:08.471 --rc genhtml_function_coverage=1 00:06:08.471 --rc genhtml_legend=1 00:06:08.471 --rc geninfo_all_blocks=1 00:06:08.471 --rc geninfo_unexecuted_blocks=1 00:06:08.471 00:06:08.471 ' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:08.471 * First test run, liburing in use 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.471 ************************************ 00:06:08.471 START TEST dd_flag_append 00:06:08.471 ************************************ 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=mpl10thg7er6z9v2rf3bifwkb19uuypc 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=o2hrte2gev6mpoml6ft2j5k6nc54kodw 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s mpl10thg7er6z9v2rf3bifwkb19uuypc 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s o2hrte2gev6mpoml6ft2j5k6nc54kodw 00:06:08.471 09:55:40 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:08.471 [2024-11-04 09:55:40.531154] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:08.471 [2024-11-04 09:55:40.531264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60076 ] 00:06:08.729 [2024-11-04 09:55:40.678065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.729 [2024-11-04 09:55:40.725425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.729 [2024-11-04 09:55:40.777640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.729  [2024-11-04T09:55:41.157Z] Copying: 32/32 [B] (average 31 kBps) 00:06:08.987 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ o2hrte2gev6mpoml6ft2j5k6nc54kodwmpl10thg7er6z9v2rf3bifwkb19uuypc == \o\2\h\r\t\e\2\g\e\v\6\m\p\o\m\l\6\f\t\2\j\5\k\6\n\c\5\4\k\o\d\w\m\p\l\1\0\t\h\g\7\e\r\6\z\9\v\2\r\f\3\b\i\f\w\k\b\1\9\u\u\y\p\c ]] 00:06:08.987 00:06:08.987 real 0m0.552s 00:06:08.987 user 0m0.301s 00:06:08.987 sys 0m0.268s 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:08.987 ************************************ 00:06:08.987 END TEST dd_flag_append 00:06:08.987 ************************************ 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.987 ************************************ 00:06:08.987 START TEST dd_flag_directory 00:06:08.987 ************************************ 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.987 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.987 [2024-11-04 09:55:41.135029] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:08.987 [2024-11-04 09:55:41.135133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:06:09.249 [2024-11-04 09:55:41.279223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.249 [2024-11-04 09:55:41.340726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.249 [2024-11-04 09:55:41.396564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.526 [2024-11-04 09:55:41.430641] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.526 [2024-11-04 09:55:41.430708] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.526 [2024-11-04 09:55:41.430742] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.526 [2024-11-04 09:55:41.543244] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.526 09:55:41 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.526 09:55:41 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.527 [2024-11-04 09:55:41.662984] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:09.527 [2024-11-04 09:55:41.663086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60114 ] 00:06:09.785 [2024-11-04 09:55:41.809512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.785 [2024-11-04 09:55:41.861020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.785 [2024-11-04 09:55:41.915145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.785 [2024-11-04 09:55:41.948802] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.785 [2024-11-04 09:55:41.948884] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.785 [2024-11-04 09:55:41.948919] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.045 [2024-11-04 09:55:42.063079] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.045 00:06:10.045 real 0m1.067s 00:06:10.045 user 0m0.582s 00:06:10.045 sys 0m0.277s 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:10.045 ************************************ 00:06:10.045 END TEST dd_flag_directory 00:06:10.045 ************************************ 00:06:10.045 09:55:42 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:10.045 ************************************ 00:06:10.045 START TEST dd_flag_nofollow 00:06:10.045 ************************************ 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.045 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.306 [2024-11-04 09:55:42.259992] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:10.306 [2024-11-04 09:55:42.260113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60137 ] 00:06:10.306 [2024-11-04 09:55:42.408427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.567 [2024-11-04 09:55:42.479187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.567 [2024-11-04 09:55:42.536269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.567 [2024-11-04 09:55:42.574028] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:10.567 [2024-11-04 09:55:42.574118] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:10.567 [2024-11-04 09:55:42.574153] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.567 [2024-11-04 09:55:42.698348] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:10.825 09:55:42 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:10.825 09:55:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:10.825 [2024-11-04 09:55:42.837327] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:10.825 [2024-11-04 09:55:42.837446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60152 ] 00:06:10.825 [2024-11-04 09:55:42.985037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.083 [2024-11-04 09:55:43.042147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.083 [2024-11-04 09:55:43.093612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.083 [2024-11-04 09:55:43.128492] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:11.083 [2024-11-04 09:55:43.128565] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:11.083 [2024-11-04 09:55:43.128600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.083 [2024-11-04 09:55:43.242823] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:11.340 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.340 [2024-11-04 09:55:43.371991] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:11.340 [2024-11-04 09:55:43.372135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60160 ] 00:06:11.598 [2024-11-04 09:55:43.515331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.598 [2024-11-04 09:55:43.570736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.599 [2024-11-04 09:55:43.623885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.599  [2024-11-04T09:55:44.026Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.856 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ wktfuvcvyxs331av8n9po53w48010kfinqv12s79its0qyn356rheqas5vdszpleosveahoqq95etewf2ktm2wp91wzc8i5yq5y552wg4f71v6iyzo8o19siujlj5o8jhk2y3ufurs7tlml1xv1j8zx1hrijhnykatc1bul7i6bv8jlfk776ymofoszabwx60rshy4u0a7c1oepsgwmzjfp6o7bjtwyv38ptp8qpnnshzbklgvdwdpwn0s6lie6f6waz0veyv48vp17r8a8ucv4gz0v20oefz0i5p86lg24bb8ynp082rttb0lgocfj1zaq5e9gttbtau036rikkssjh5z24h45kav7tpl8gudr3rfnz8lusq0gpdlx2ka3j91rd8l9vmitz1skqge4esz8cib0ucn849mmb8dz8d7prdiah3v7qmvhhoyqvvs2wrfe2bert3t6kg6p0vytdgmaw7haugo10bm91bqsjxks1hyycdibzj92mxb1s1g78 == \w\k\t\f\u\v\c\v\y\x\s\3\3\1\a\v\8\n\9\p\o\5\3\w\4\8\0\1\0\k\f\i\n\q\v\1\2\s\7\9\i\t\s\0\q\y\n\3\5\6\r\h\e\q\a\s\5\v\d\s\z\p\l\e\o\s\v\e\a\h\o\q\q\9\5\e\t\e\w\f\2\k\t\m\2\w\p\9\1\w\z\c\8\i\5\y\q\5\y\5\5\2\w\g\4\f\7\1\v\6\i\y\z\o\8\o\1\9\s\i\u\j\l\j\5\o\8\j\h\k\2\y\3\u\f\u\r\s\7\t\l\m\l\1\x\v\1\j\8\z\x\1\h\r\i\j\h\n\y\k\a\t\c\1\b\u\l\7\i\6\b\v\8\j\l\f\k\7\7\6\y\m\o\f\o\s\z\a\b\w\x\6\0\r\s\h\y\4\u\0\a\7\c\1\o\e\p\s\g\w\m\z\j\f\p\6\o\7\b\j\t\w\y\v\3\8\p\t\p\8\q\p\n\n\s\h\z\b\k\l\g\v\d\w\d\p\w\n\0\s\6\l\i\e\6\f\6\w\a\z\0\v\e\y\v\4\8\v\p\1\7\r\8\a\8\u\c\v\4\g\z\0\v\2\0\o\e\f\z\0\i\5\p\8\6\l\g\2\4\b\b\8\y\n\p\0\8\2\r\t\t\b\0\l\g\o\c\f\j\1\z\a\q\5\e\9\g\t\t\b\t\a\u\0\3\6\r\i\k\k\s\s\j\h\5\z\2\4\h\4\5\k\a\v\7\t\p\l\8\g\u\d\r\3\r\f\n\z\8\l\u\s\q\0\g\p\d\l\x\2\k\a\3\j\9\1\r\d\8\l\9\v\m\i\t\z\1\s\k\q\g\e\4\e\s\z\8\c\i\b\0\u\c\n\8\4\9\m\m\b\8\d\z\8\d\7\p\r\d\i\a\h\3\v\7\q\m\v\h\h\o\y\q\v\v\s\2\w\r\f\e\2\b\e\r\t\3\t\6\k\g\6\p\0\v\y\t\d\g\m\a\w\7\h\a\u\g\o\1\0\b\m\9\1\b\q\s\j\x\k\s\1\h\y\y\c\d\i\b\z\j\9\2\m\x\b\1\s\1\g\7\8 ]] 00:06:11.856 00:06:11.856 real 0m1.664s 00:06:11.856 user 0m0.927s 00:06:11.856 sys 0m0.546s 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.856 ************************************ 00:06:11.856 END TEST dd_flag_nofollow 00:06:11.856 ************************************ 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:11.856 ************************************ 00:06:11.856 START TEST dd_flag_noatime 00:06:11.856 ************************************ 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1730714143 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1730714143 00:06:11.856 09:55:43 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:12.790 09:55:44 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.048 [2024-11-04 09:55:44.977702] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:13.048 [2024-11-04 09:55:44.977809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60204 ] 00:06:13.048 [2024-11-04 09:55:45.118803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.048 [2024-11-04 09:55:45.166038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.307 [2024-11-04 09:55:45.222690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.307  [2024-11-04T09:55:45.477Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.307 00:06:13.307 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.307 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1730714143 )) 00:06:13.307 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.307 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1730714143 )) 00:06:13.307 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:13.565 [2024-11-04 09:55:45.489221] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:13.565 [2024-11-04 09:55:45.489344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60218 ] 00:06:13.565 [2024-11-04 09:55:45.630059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.565 [2024-11-04 09:55:45.674326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.565 [2024-11-04 09:55:45.726576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.825  [2024-11-04T09:55:45.995Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.825 00:06:13.825 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:13.825 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1730714145 )) 00:06:13.825 00:06:13.825 real 0m2.037s 00:06:13.825 user 0m0.528s 00:06:13.825 sys 0m0.553s 00:06:13.825 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:13.825 09:55:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:13.825 ************************************ 00:06:13.825 END TEST dd_flag_noatime 00:06:13.825 ************************************ 00:06:13.825 09:55:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:13.825 09:55:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:13.825 09:55:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:13.825 09:55:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:14.086 ************************************ 00:06:14.086 START TEST dd_flags_misc 00:06:14.086 ************************************ 00:06:14.086 09:55:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:06:14.086 09:55:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:14.086 09:55:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:14.086 09:55:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:14.086 09:55:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:14.086 09:55:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:14.086 09:55:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:14.086 09:55:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:14.086 09:55:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.086 09:55:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:14.086 [2024-11-04 09:55:46.059245] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:14.086 [2024-11-04 09:55:46.059351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60246 ] 00:06:14.086 [2024-11-04 09:55:46.207395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.344 [2024-11-04 09:55:46.264818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.344 [2024-11-04 09:55:46.317024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.344  [2024-11-04T09:55:46.771Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.601 00:06:14.601 09:55:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3wtornycfwd7rw1cs018ch5aneh7ibo9g2btfmx3lf3v97fy9thkyjyl940xt4hydy0ytt1v6uyi59tko2jz35tuiwko5y8tgznaceenq2p1fa9qbytwektlk7iausbie4pp3ibsm3gmp1movzwha2g6jxyixlg5x7h6ml5q85o7hw5icfxeshse2ltxg84eui29j3amvibx2wqra654t914vqqm8qu0m9lwppmh6i1ugtzooy1ht6fnt9p5mdllo4ae0rnbjsc5gftmhz5jejr8266oq2bilyyq80nujufef6u4f7ijopn8b258oscddqjw5c0ymcyuq25aw76nr9updfzqkhldocy76841iux4kzhytda6qx9bgnj3kqi5t1a3hj019bh37s5xssrlihg4oeovaszpsw714lezisje6a772ju0cwnqzr7or8g4c7zsvjodu7qwn5l3pp3cyy8uilncovk64gyu8na5yu3ha0cphd6zezgj3nhgxbn5 == \3\w\t\o\r\n\y\c\f\w\d\7\r\w\1\c\s\0\1\8\c\h\5\a\n\e\h\7\i\b\o\9\g\2\b\t\f\m\x\3\l\f\3\v\9\7\f\y\9\t\h\k\y\j\y\l\9\4\0\x\t\4\h\y\d\y\0\y\t\t\1\v\6\u\y\i\5\9\t\k\o\2\j\z\3\5\t\u\i\w\k\o\5\y\8\t\g\z\n\a\c\e\e\n\q\2\p\1\f\a\9\q\b\y\t\w\e\k\t\l\k\7\i\a\u\s\b\i\e\4\p\p\3\i\b\s\m\3\g\m\p\1\m\o\v\z\w\h\a\2\g\6\j\x\y\i\x\l\g\5\x\7\h\6\m\l\5\q\8\5\o\7\h\w\5\i\c\f\x\e\s\h\s\e\2\l\t\x\g\8\4\e\u\i\2\9\j\3\a\m\v\i\b\x\2\w\q\r\a\6\5\4\t\9\1\4\v\q\q\m\8\q\u\0\m\9\l\w\p\p\m\h\6\i\1\u\g\t\z\o\o\y\1\h\t\6\f\n\t\9\p\5\m\d\l\l\o\4\a\e\0\r\n\b\j\s\c\5\g\f\t\m\h\z\5\j\e\j\r\8\2\6\6\o\q\2\b\i\l\y\y\q\8\0\n\u\j\u\f\e\f\6\u\4\f\7\i\j\o\p\n\8\b\2\5\8\o\s\c\d\d\q\j\w\5\c\0\y\m\c\y\u\q\2\5\a\w\7\6\n\r\9\u\p\d\f\z\q\k\h\l\d\o\c\y\7\6\8\4\1\i\u\x\4\k\z\h\y\t\d\a\6\q\x\9\b\g\n\j\3\k\q\i\5\t\1\a\3\h\j\0\1\9\b\h\3\7\s\5\x\s\s\r\l\i\h\g\4\o\e\o\v\a\s\z\p\s\w\7\1\4\l\e\z\i\s\j\e\6\a\7\7\2\j\u\0\c\w\n\q\z\r\7\o\r\8\g\4\c\7\z\s\v\j\o\d\u\7\q\w\n\5\l\3\p\p\3\c\y\y\8\u\i\l\n\c\o\v\k\6\4\g\y\u\8\n\a\5\y\u\3\h\a\0\c\p\h\d\6\z\e\z\g\j\3\n\h\g\x\b\n\5 ]] 00:06:14.601 09:55:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.601 09:55:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:14.601 [2024-11-04 09:55:46.591880] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:14.601 [2024-11-04 09:55:46.591992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60256 ] 00:06:14.601 [2024-11-04 09:55:46.731890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.859 [2024-11-04 09:55:46.782465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.859 [2024-11-04 09:55:46.837004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.859  [2024-11-04T09:55:47.286Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.116 00:06:15.116 09:55:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3wtornycfwd7rw1cs018ch5aneh7ibo9g2btfmx3lf3v97fy9thkyjyl940xt4hydy0ytt1v6uyi59tko2jz35tuiwko5y8tgznaceenq2p1fa9qbytwektlk7iausbie4pp3ibsm3gmp1movzwha2g6jxyixlg5x7h6ml5q85o7hw5icfxeshse2ltxg84eui29j3amvibx2wqra654t914vqqm8qu0m9lwppmh6i1ugtzooy1ht6fnt9p5mdllo4ae0rnbjsc5gftmhz5jejr8266oq2bilyyq80nujufef6u4f7ijopn8b258oscddqjw5c0ymcyuq25aw76nr9updfzqkhldocy76841iux4kzhytda6qx9bgnj3kqi5t1a3hj019bh37s5xssrlihg4oeovaszpsw714lezisje6a772ju0cwnqzr7or8g4c7zsvjodu7qwn5l3pp3cyy8uilncovk64gyu8na5yu3ha0cphd6zezgj3nhgxbn5 == \3\w\t\o\r\n\y\c\f\w\d\7\r\w\1\c\s\0\1\8\c\h\5\a\n\e\h\7\i\b\o\9\g\2\b\t\f\m\x\3\l\f\3\v\9\7\f\y\9\t\h\k\y\j\y\l\9\4\0\x\t\4\h\y\d\y\0\y\t\t\1\v\6\u\y\i\5\9\t\k\o\2\j\z\3\5\t\u\i\w\k\o\5\y\8\t\g\z\n\a\c\e\e\n\q\2\p\1\f\a\9\q\b\y\t\w\e\k\t\l\k\7\i\a\u\s\b\i\e\4\p\p\3\i\b\s\m\3\g\m\p\1\m\o\v\z\w\h\a\2\g\6\j\x\y\i\x\l\g\5\x\7\h\6\m\l\5\q\8\5\o\7\h\w\5\i\c\f\x\e\s\h\s\e\2\l\t\x\g\8\4\e\u\i\2\9\j\3\a\m\v\i\b\x\2\w\q\r\a\6\5\4\t\9\1\4\v\q\q\m\8\q\u\0\m\9\l\w\p\p\m\h\6\i\1\u\g\t\z\o\o\y\1\h\t\6\f\n\t\9\p\5\m\d\l\l\o\4\a\e\0\r\n\b\j\s\c\5\g\f\t\m\h\z\5\j\e\j\r\8\2\6\6\o\q\2\b\i\l\y\y\q\8\0\n\u\j\u\f\e\f\6\u\4\f\7\i\j\o\p\n\8\b\2\5\8\o\s\c\d\d\q\j\w\5\c\0\y\m\c\y\u\q\2\5\a\w\7\6\n\r\9\u\p\d\f\z\q\k\h\l\d\o\c\y\7\6\8\4\1\i\u\x\4\k\z\h\y\t\d\a\6\q\x\9\b\g\n\j\3\k\q\i\5\t\1\a\3\h\j\0\1\9\b\h\3\7\s\5\x\s\s\r\l\i\h\g\4\o\e\o\v\a\s\z\p\s\w\7\1\4\l\e\z\i\s\j\e\6\a\7\7\2\j\u\0\c\w\n\q\z\r\7\o\r\8\g\4\c\7\z\s\v\j\o\d\u\7\q\w\n\5\l\3\p\p\3\c\y\y\8\u\i\l\n\c\o\v\k\6\4\g\y\u\8\n\a\5\y\u\3\h\a\0\c\p\h\d\6\z\e\z\g\j\3\n\h\g\x\b\n\5 ]] 00:06:15.116 09:55:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.116 09:55:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:15.116 [2024-11-04 09:55:47.123025] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:15.117 [2024-11-04 09:55:47.123172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:06:15.117 [2024-11-04 09:55:47.270275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.375 [2024-11-04 09:55:47.328111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.375 [2024-11-04 09:55:47.381720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.375  [2024-11-04T09:55:47.803Z] Copying: 512/512 [B] (average 166 kBps) 00:06:15.633 00:06:15.633 09:55:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3wtornycfwd7rw1cs018ch5aneh7ibo9g2btfmx3lf3v97fy9thkyjyl940xt4hydy0ytt1v6uyi59tko2jz35tuiwko5y8tgznaceenq2p1fa9qbytwektlk7iausbie4pp3ibsm3gmp1movzwha2g6jxyixlg5x7h6ml5q85o7hw5icfxeshse2ltxg84eui29j3amvibx2wqra654t914vqqm8qu0m9lwppmh6i1ugtzooy1ht6fnt9p5mdllo4ae0rnbjsc5gftmhz5jejr8266oq2bilyyq80nujufef6u4f7ijopn8b258oscddqjw5c0ymcyuq25aw76nr9updfzqkhldocy76841iux4kzhytda6qx9bgnj3kqi5t1a3hj019bh37s5xssrlihg4oeovaszpsw714lezisje6a772ju0cwnqzr7or8g4c7zsvjodu7qwn5l3pp3cyy8uilncovk64gyu8na5yu3ha0cphd6zezgj3nhgxbn5 == \3\w\t\o\r\n\y\c\f\w\d\7\r\w\1\c\s\0\1\8\c\h\5\a\n\e\h\7\i\b\o\9\g\2\b\t\f\m\x\3\l\f\3\v\9\7\f\y\9\t\h\k\y\j\y\l\9\4\0\x\t\4\h\y\d\y\0\y\t\t\1\v\6\u\y\i\5\9\t\k\o\2\j\z\3\5\t\u\i\w\k\o\5\y\8\t\g\z\n\a\c\e\e\n\q\2\p\1\f\a\9\q\b\y\t\w\e\k\t\l\k\7\i\a\u\s\b\i\e\4\p\p\3\i\b\s\m\3\g\m\p\1\m\o\v\z\w\h\a\2\g\6\j\x\y\i\x\l\g\5\x\7\h\6\m\l\5\q\8\5\o\7\h\w\5\i\c\f\x\e\s\h\s\e\2\l\t\x\g\8\4\e\u\i\2\9\j\3\a\m\v\i\b\x\2\w\q\r\a\6\5\4\t\9\1\4\v\q\q\m\8\q\u\0\m\9\l\w\p\p\m\h\6\i\1\u\g\t\z\o\o\y\1\h\t\6\f\n\t\9\p\5\m\d\l\l\o\4\a\e\0\r\n\b\j\s\c\5\g\f\t\m\h\z\5\j\e\j\r\8\2\6\6\o\q\2\b\i\l\y\y\q\8\0\n\u\j\u\f\e\f\6\u\4\f\7\i\j\o\p\n\8\b\2\5\8\o\s\c\d\d\q\j\w\5\c\0\y\m\c\y\u\q\2\5\a\w\7\6\n\r\9\u\p\d\f\z\q\k\h\l\d\o\c\y\7\6\8\4\1\i\u\x\4\k\z\h\y\t\d\a\6\q\x\9\b\g\n\j\3\k\q\i\5\t\1\a\3\h\j\0\1\9\b\h\3\7\s\5\x\s\s\r\l\i\h\g\4\o\e\o\v\a\s\z\p\s\w\7\1\4\l\e\z\i\s\j\e\6\a\7\7\2\j\u\0\c\w\n\q\z\r\7\o\r\8\g\4\c\7\z\s\v\j\o\d\u\7\q\w\n\5\l\3\p\p\3\c\y\y\8\u\i\l\n\c\o\v\k\6\4\g\y\u\8\n\a\5\y\u\3\h\a\0\c\p\h\d\6\z\e\z\g\j\3\n\h\g\x\b\n\5 ]] 00:06:15.633 09:55:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.633 09:55:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:15.633 [2024-11-04 09:55:47.638867] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:15.633 [2024-11-04 09:55:47.638949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60276 ] 00:06:15.633 [2024-11-04 09:55:47.777694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.891 [2024-11-04 09:55:47.835430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.891 [2024-11-04 09:55:47.888005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.891  [2024-11-04T09:55:48.319Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.149 00:06:16.149 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3wtornycfwd7rw1cs018ch5aneh7ibo9g2btfmx3lf3v97fy9thkyjyl940xt4hydy0ytt1v6uyi59tko2jz35tuiwko5y8tgznaceenq2p1fa9qbytwektlk7iausbie4pp3ibsm3gmp1movzwha2g6jxyixlg5x7h6ml5q85o7hw5icfxeshse2ltxg84eui29j3amvibx2wqra654t914vqqm8qu0m9lwppmh6i1ugtzooy1ht6fnt9p5mdllo4ae0rnbjsc5gftmhz5jejr8266oq2bilyyq80nujufef6u4f7ijopn8b258oscddqjw5c0ymcyuq25aw76nr9updfzqkhldocy76841iux4kzhytda6qx9bgnj3kqi5t1a3hj019bh37s5xssrlihg4oeovaszpsw714lezisje6a772ju0cwnqzr7or8g4c7zsvjodu7qwn5l3pp3cyy8uilncovk64gyu8na5yu3ha0cphd6zezgj3nhgxbn5 == \3\w\t\o\r\n\y\c\f\w\d\7\r\w\1\c\s\0\1\8\c\h\5\a\n\e\h\7\i\b\o\9\g\2\b\t\f\m\x\3\l\f\3\v\9\7\f\y\9\t\h\k\y\j\y\l\9\4\0\x\t\4\h\y\d\y\0\y\t\t\1\v\6\u\y\i\5\9\t\k\o\2\j\z\3\5\t\u\i\w\k\o\5\y\8\t\g\z\n\a\c\e\e\n\q\2\p\1\f\a\9\q\b\y\t\w\e\k\t\l\k\7\i\a\u\s\b\i\e\4\p\p\3\i\b\s\m\3\g\m\p\1\m\o\v\z\w\h\a\2\g\6\j\x\y\i\x\l\g\5\x\7\h\6\m\l\5\q\8\5\o\7\h\w\5\i\c\f\x\e\s\h\s\e\2\l\t\x\g\8\4\e\u\i\2\9\j\3\a\m\v\i\b\x\2\w\q\r\a\6\5\4\t\9\1\4\v\q\q\m\8\q\u\0\m\9\l\w\p\p\m\h\6\i\1\u\g\t\z\o\o\y\1\h\t\6\f\n\t\9\p\5\m\d\l\l\o\4\a\e\0\r\n\b\j\s\c\5\g\f\t\m\h\z\5\j\e\j\r\8\2\6\6\o\q\2\b\i\l\y\y\q\8\0\n\u\j\u\f\e\f\6\u\4\f\7\i\j\o\p\n\8\b\2\5\8\o\s\c\d\d\q\j\w\5\c\0\y\m\c\y\u\q\2\5\a\w\7\6\n\r\9\u\p\d\f\z\q\k\h\l\d\o\c\y\7\6\8\4\1\i\u\x\4\k\z\h\y\t\d\a\6\q\x\9\b\g\n\j\3\k\q\i\5\t\1\a\3\h\j\0\1\9\b\h\3\7\s\5\x\s\s\r\l\i\h\g\4\o\e\o\v\a\s\z\p\s\w\7\1\4\l\e\z\i\s\j\e\6\a\7\7\2\j\u\0\c\w\n\q\z\r\7\o\r\8\g\4\c\7\z\s\v\j\o\d\u\7\q\w\n\5\l\3\p\p\3\c\y\y\8\u\i\l\n\c\o\v\k\6\4\g\y\u\8\n\a\5\y\u\3\h\a\0\c\p\h\d\6\z\e\z\g\j\3\n\h\g\x\b\n\5 ]] 00:06:16.149 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:16.149 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:16.149 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:16.149 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:16.149 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.149 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:16.149 [2024-11-04 09:55:48.175007] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:16.149 [2024-11-04 09:55:48.175097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60284 ] 00:06:16.149 [2024-11-04 09:55:48.315692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.406 [2024-11-04 09:55:48.375392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.406 [2024-11-04 09:55:48.431270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.406  [2024-11-04T09:55:48.834Z] Copying: 512/512 [B] (average 500 kBps) 00:06:16.664 00:06:16.664 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o950vp442p5qg26o4u569zbvoz02gsgqqhbipmg4w3z8mqw8fud8wqww2a2769bainnd2g3s2fk0uamfcud52aoqxma41goliwwhy2an2n09dpjwyqzc1bqr1umqniqdtdp9ag61giiy67k5qryi94h9tmdprqwyummzqlo3fu557rl1213bxenekar7lfrjjetsyxev24yrr2geualu9iihs8d5vlmvc8a6ofju2863pvgq07tgysvycpguaw215178c8yl3p58l5vu7gh8m4nvjl6cctn3yuqsveitro02lnr02ek10o6w43aquqs5uwjgdqaxwowewka9wa3a38fhnj2v8ddlwu6kt7iov8ld1w6lvfftakmfsc06wtc8asspvyx6zv68xguuhsgqz5zb0ddwn673zijfsa7a7up2vxzdumec6sw74cex88ocydwgk9pu7hdhv6fvafqbmscfch1uwdbszgt09h5nmlpx7msl0nvlkcaekme120vz == \o\9\5\0\v\p\4\4\2\p\5\q\g\2\6\o\4\u\5\6\9\z\b\v\o\z\0\2\g\s\g\q\q\h\b\i\p\m\g\4\w\3\z\8\m\q\w\8\f\u\d\8\w\q\w\w\2\a\2\7\6\9\b\a\i\n\n\d\2\g\3\s\2\f\k\0\u\a\m\f\c\u\d\5\2\a\o\q\x\m\a\4\1\g\o\l\i\w\w\h\y\2\a\n\2\n\0\9\d\p\j\w\y\q\z\c\1\b\q\r\1\u\m\q\n\i\q\d\t\d\p\9\a\g\6\1\g\i\i\y\6\7\k\5\q\r\y\i\9\4\h\9\t\m\d\p\r\q\w\y\u\m\m\z\q\l\o\3\f\u\5\5\7\r\l\1\2\1\3\b\x\e\n\e\k\a\r\7\l\f\r\j\j\e\t\s\y\x\e\v\2\4\y\r\r\2\g\e\u\a\l\u\9\i\i\h\s\8\d\5\v\l\m\v\c\8\a\6\o\f\j\u\2\8\6\3\p\v\g\q\0\7\t\g\y\s\v\y\c\p\g\u\a\w\2\1\5\1\7\8\c\8\y\l\3\p\5\8\l\5\v\u\7\g\h\8\m\4\n\v\j\l\6\c\c\t\n\3\y\u\q\s\v\e\i\t\r\o\0\2\l\n\r\0\2\e\k\1\0\o\6\w\4\3\a\q\u\q\s\5\u\w\j\g\d\q\a\x\w\o\w\e\w\k\a\9\w\a\3\a\3\8\f\h\n\j\2\v\8\d\d\l\w\u\6\k\t\7\i\o\v\8\l\d\1\w\6\l\v\f\f\t\a\k\m\f\s\c\0\6\w\t\c\8\a\s\s\p\v\y\x\6\z\v\6\8\x\g\u\u\h\s\g\q\z\5\z\b\0\d\d\w\n\6\7\3\z\i\j\f\s\a\7\a\7\u\p\2\v\x\z\d\u\m\e\c\6\s\w\7\4\c\e\x\8\8\o\c\y\d\w\g\k\9\p\u\7\h\d\h\v\6\f\v\a\f\q\b\m\s\c\f\c\h\1\u\w\d\b\s\z\g\t\0\9\h\5\n\m\l\p\x\7\m\s\l\0\n\v\l\k\c\a\e\k\m\e\1\2\0\v\z ]] 00:06:16.664 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:16.664 09:55:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:16.664 [2024-11-04 09:55:48.698674] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:16.664 [2024-11-04 09:55:48.698780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60300 ] 00:06:16.921 [2024-11-04 09:55:48.841657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.921 [2024-11-04 09:55:48.902760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.921 [2024-11-04 09:55:48.955621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.921  [2024-11-04T09:55:49.349Z] Copying: 512/512 [B] (average 500 kBps) 00:06:17.179 00:06:17.179 09:55:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o950vp442p5qg26o4u569zbvoz02gsgqqhbipmg4w3z8mqw8fud8wqww2a2769bainnd2g3s2fk0uamfcud52aoqxma41goliwwhy2an2n09dpjwyqzc1bqr1umqniqdtdp9ag61giiy67k5qryi94h9tmdprqwyummzqlo3fu557rl1213bxenekar7lfrjjetsyxev24yrr2geualu9iihs8d5vlmvc8a6ofju2863pvgq07tgysvycpguaw215178c8yl3p58l5vu7gh8m4nvjl6cctn3yuqsveitro02lnr02ek10o6w43aquqs5uwjgdqaxwowewka9wa3a38fhnj2v8ddlwu6kt7iov8ld1w6lvfftakmfsc06wtc8asspvyx6zv68xguuhsgqz5zb0ddwn673zijfsa7a7up2vxzdumec6sw74cex88ocydwgk9pu7hdhv6fvafqbmscfch1uwdbszgt09h5nmlpx7msl0nvlkcaekme120vz == \o\9\5\0\v\p\4\4\2\p\5\q\g\2\6\o\4\u\5\6\9\z\b\v\o\z\0\2\g\s\g\q\q\h\b\i\p\m\g\4\w\3\z\8\m\q\w\8\f\u\d\8\w\q\w\w\2\a\2\7\6\9\b\a\i\n\n\d\2\g\3\s\2\f\k\0\u\a\m\f\c\u\d\5\2\a\o\q\x\m\a\4\1\g\o\l\i\w\w\h\y\2\a\n\2\n\0\9\d\p\j\w\y\q\z\c\1\b\q\r\1\u\m\q\n\i\q\d\t\d\p\9\a\g\6\1\g\i\i\y\6\7\k\5\q\r\y\i\9\4\h\9\t\m\d\p\r\q\w\y\u\m\m\z\q\l\o\3\f\u\5\5\7\r\l\1\2\1\3\b\x\e\n\e\k\a\r\7\l\f\r\j\j\e\t\s\y\x\e\v\2\4\y\r\r\2\g\e\u\a\l\u\9\i\i\h\s\8\d\5\v\l\m\v\c\8\a\6\o\f\j\u\2\8\6\3\p\v\g\q\0\7\t\g\y\s\v\y\c\p\g\u\a\w\2\1\5\1\7\8\c\8\y\l\3\p\5\8\l\5\v\u\7\g\h\8\m\4\n\v\j\l\6\c\c\t\n\3\y\u\q\s\v\e\i\t\r\o\0\2\l\n\r\0\2\e\k\1\0\o\6\w\4\3\a\q\u\q\s\5\u\w\j\g\d\q\a\x\w\o\w\e\w\k\a\9\w\a\3\a\3\8\f\h\n\j\2\v\8\d\d\l\w\u\6\k\t\7\i\o\v\8\l\d\1\w\6\l\v\f\f\t\a\k\m\f\s\c\0\6\w\t\c\8\a\s\s\p\v\y\x\6\z\v\6\8\x\g\u\u\h\s\g\q\z\5\z\b\0\d\d\w\n\6\7\3\z\i\j\f\s\a\7\a\7\u\p\2\v\x\z\d\u\m\e\c\6\s\w\7\4\c\e\x\8\8\o\c\y\d\w\g\k\9\p\u\7\h\d\h\v\6\f\v\a\f\q\b\m\s\c\f\c\h\1\u\w\d\b\s\z\g\t\0\9\h\5\n\m\l\p\x\7\m\s\l\0\n\v\l\k\c\a\e\k\m\e\1\2\0\v\z ]] 00:06:17.179 09:55:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.179 09:55:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:17.179 [2024-11-04 09:55:49.241814] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:17.179 [2024-11-04 09:55:49.241940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60305 ] 00:06:17.437 [2024-11-04 09:55:49.390773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.437 [2024-11-04 09:55:49.451801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.437 [2024-11-04 09:55:49.506632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.437  [2024-11-04T09:55:49.864Z] Copying: 512/512 [B] (average 166 kBps) 00:06:17.694 00:06:17.694 09:55:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o950vp442p5qg26o4u569zbvoz02gsgqqhbipmg4w3z8mqw8fud8wqww2a2769bainnd2g3s2fk0uamfcud52aoqxma41goliwwhy2an2n09dpjwyqzc1bqr1umqniqdtdp9ag61giiy67k5qryi94h9tmdprqwyummzqlo3fu557rl1213bxenekar7lfrjjetsyxev24yrr2geualu9iihs8d5vlmvc8a6ofju2863pvgq07tgysvycpguaw215178c8yl3p58l5vu7gh8m4nvjl6cctn3yuqsveitro02lnr02ek10o6w43aquqs5uwjgdqaxwowewka9wa3a38fhnj2v8ddlwu6kt7iov8ld1w6lvfftakmfsc06wtc8asspvyx6zv68xguuhsgqz5zb0ddwn673zijfsa7a7up2vxzdumec6sw74cex88ocydwgk9pu7hdhv6fvafqbmscfch1uwdbszgt09h5nmlpx7msl0nvlkcaekme120vz == \o\9\5\0\v\p\4\4\2\p\5\q\g\2\6\o\4\u\5\6\9\z\b\v\o\z\0\2\g\s\g\q\q\h\b\i\p\m\g\4\w\3\z\8\m\q\w\8\f\u\d\8\w\q\w\w\2\a\2\7\6\9\b\a\i\n\n\d\2\g\3\s\2\f\k\0\u\a\m\f\c\u\d\5\2\a\o\q\x\m\a\4\1\g\o\l\i\w\w\h\y\2\a\n\2\n\0\9\d\p\j\w\y\q\z\c\1\b\q\r\1\u\m\q\n\i\q\d\t\d\p\9\a\g\6\1\g\i\i\y\6\7\k\5\q\r\y\i\9\4\h\9\t\m\d\p\r\q\w\y\u\m\m\z\q\l\o\3\f\u\5\5\7\r\l\1\2\1\3\b\x\e\n\e\k\a\r\7\l\f\r\j\j\e\t\s\y\x\e\v\2\4\y\r\r\2\g\e\u\a\l\u\9\i\i\h\s\8\d\5\v\l\m\v\c\8\a\6\o\f\j\u\2\8\6\3\p\v\g\q\0\7\t\g\y\s\v\y\c\p\g\u\a\w\2\1\5\1\7\8\c\8\y\l\3\p\5\8\l\5\v\u\7\g\h\8\m\4\n\v\j\l\6\c\c\t\n\3\y\u\q\s\v\e\i\t\r\o\0\2\l\n\r\0\2\e\k\1\0\o\6\w\4\3\a\q\u\q\s\5\u\w\j\g\d\q\a\x\w\o\w\e\w\k\a\9\w\a\3\a\3\8\f\h\n\j\2\v\8\d\d\l\w\u\6\k\t\7\i\o\v\8\l\d\1\w\6\l\v\f\f\t\a\k\m\f\s\c\0\6\w\t\c\8\a\s\s\p\v\y\x\6\z\v\6\8\x\g\u\u\h\s\g\q\z\5\z\b\0\d\d\w\n\6\7\3\z\i\j\f\s\a\7\a\7\u\p\2\v\x\z\d\u\m\e\c\6\s\w\7\4\c\e\x\8\8\o\c\y\d\w\g\k\9\p\u\7\h\d\h\v\6\f\v\a\f\q\b\m\s\c\f\c\h\1\u\w\d\b\s\z\g\t\0\9\h\5\n\m\l\p\x\7\m\s\l\0\n\v\l\k\c\a\e\k\m\e\1\2\0\v\z ]] 00:06:17.694 09:55:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:17.694 09:55:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:17.695 [2024-11-04 09:55:49.765583] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:17.695 [2024-11-04 09:55:49.765715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60319 ] 00:06:17.952 [2024-11-04 09:55:49.906030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.952 [2024-11-04 09:55:49.963264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.952 [2024-11-04 09:55:50.016294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.952  [2024-11-04T09:55:50.380Z] Copying: 512/512 [B] (average 166 kBps) 00:06:18.210 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ o950vp442p5qg26o4u569zbvoz02gsgqqhbipmg4w3z8mqw8fud8wqww2a2769bainnd2g3s2fk0uamfcud52aoqxma41goliwwhy2an2n09dpjwyqzc1bqr1umqniqdtdp9ag61giiy67k5qryi94h9tmdprqwyummzqlo3fu557rl1213bxenekar7lfrjjetsyxev24yrr2geualu9iihs8d5vlmvc8a6ofju2863pvgq07tgysvycpguaw215178c8yl3p58l5vu7gh8m4nvjl6cctn3yuqsveitro02lnr02ek10o6w43aquqs5uwjgdqaxwowewka9wa3a38fhnj2v8ddlwu6kt7iov8ld1w6lvfftakmfsc06wtc8asspvyx6zv68xguuhsgqz5zb0ddwn673zijfsa7a7up2vxzdumec6sw74cex88ocydwgk9pu7hdhv6fvafqbmscfch1uwdbszgt09h5nmlpx7msl0nvlkcaekme120vz == \o\9\5\0\v\p\4\4\2\p\5\q\g\2\6\o\4\u\5\6\9\z\b\v\o\z\0\2\g\s\g\q\q\h\b\i\p\m\g\4\w\3\z\8\m\q\w\8\f\u\d\8\w\q\w\w\2\a\2\7\6\9\b\a\i\n\n\d\2\g\3\s\2\f\k\0\u\a\m\f\c\u\d\5\2\a\o\q\x\m\a\4\1\g\o\l\i\w\w\h\y\2\a\n\2\n\0\9\d\p\j\w\y\q\z\c\1\b\q\r\1\u\m\q\n\i\q\d\t\d\p\9\a\g\6\1\g\i\i\y\6\7\k\5\q\r\y\i\9\4\h\9\t\m\d\p\r\q\w\y\u\m\m\z\q\l\o\3\f\u\5\5\7\r\l\1\2\1\3\b\x\e\n\e\k\a\r\7\l\f\r\j\j\e\t\s\y\x\e\v\2\4\y\r\r\2\g\e\u\a\l\u\9\i\i\h\s\8\d\5\v\l\m\v\c\8\a\6\o\f\j\u\2\8\6\3\p\v\g\q\0\7\t\g\y\s\v\y\c\p\g\u\a\w\2\1\5\1\7\8\c\8\y\l\3\p\5\8\l\5\v\u\7\g\h\8\m\4\n\v\j\l\6\c\c\t\n\3\y\u\q\s\v\e\i\t\r\o\0\2\l\n\r\0\2\e\k\1\0\o\6\w\4\3\a\q\u\q\s\5\u\w\j\g\d\q\a\x\w\o\w\e\w\k\a\9\w\a\3\a\3\8\f\h\n\j\2\v\8\d\d\l\w\u\6\k\t\7\i\o\v\8\l\d\1\w\6\l\v\f\f\t\a\k\m\f\s\c\0\6\w\t\c\8\a\s\s\p\v\y\x\6\z\v\6\8\x\g\u\u\h\s\g\q\z\5\z\b\0\d\d\w\n\6\7\3\z\i\j\f\s\a\7\a\7\u\p\2\v\x\z\d\u\m\e\c\6\s\w\7\4\c\e\x\8\8\o\c\y\d\w\g\k\9\p\u\7\h\d\h\v\6\f\v\a\f\q\b\m\s\c\f\c\h\1\u\w\d\b\s\z\g\t\0\9\h\5\n\m\l\p\x\7\m\s\l\0\n\v\l\k\c\a\e\k\m\e\1\2\0\v\z ]] 00:06:18.210 00:06:18.210 real 0m4.247s 00:06:18.210 user 0m2.328s 00:06:18.210 sys 0m2.093s 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:18.210 ************************************ 00:06:18.210 END TEST dd_flags_misc 00:06:18.210 ************************************ 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:18.210 * Second test run, disabling liburing, forcing AIO 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:18.210 ************************************ 00:06:18.210 START TEST dd_flag_append_forced_aio 00:06:18.210 ************************************ 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=wb6ufl83qut74zzl9jwgpmp7jf6wkpua 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=3t1vqbb72cjelad4ljf7e8ov17s37qp0 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s wb6ufl83qut74zzl9jwgpmp7jf6wkpua 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 3t1vqbb72cjelad4ljf7e8ov17s37qp0 00:06:18.210 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:18.210 [2024-11-04 09:55:50.356871] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:18.210 [2024-11-04 09:55:50.356990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60343 ] 00:06:18.479 [2024-11-04 09:55:50.506950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.479 [2024-11-04 09:55:50.587441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.765 [2024-11-04 09:55:50.650832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.765  [2024-11-04T09:55:50.935Z] Copying: 32/32 [B] (average 31 kBps) 00:06:18.765 00:06:18.765 ************************************ 00:06:18.765 END TEST dd_flag_append_forced_aio 00:06:18.765 ************************************ 00:06:18.765 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 3t1vqbb72cjelad4ljf7e8ov17s37qp0wb6ufl83qut74zzl9jwgpmp7jf6wkpua == \3\t\1\v\q\b\b\7\2\c\j\e\l\a\d\4\l\j\f\7\e\8\o\v\1\7\s\3\7\q\p\0\w\b\6\u\f\l\8\3\q\u\t\7\4\z\z\l\9\j\w\g\p\m\p\7\j\f\6\w\k\p\u\a ]] 00:06:18.765 00:06:18.765 real 0m0.589s 00:06:18.765 user 0m0.315s 00:06:18.765 sys 0m0.154s 00:06:18.765 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.765 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:18.765 09:55:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:18.765 09:55:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.765 09:55:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.765 09:55:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:18.765 ************************************ 00:06:18.765 START TEST dd_flag_directory_forced_aio 00:06:18.765 ************************************ 00:06:18.765 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.023 09:55:50 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:19.023 09:55:50 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:19.023 [2024-11-04 09:55:50.986352] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:19.023 [2024-11-04 09:55:50.986446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60375 ] 00:06:19.023 [2024-11-04 09:55:51.126011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.023 [2024-11-04 09:55:51.188425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.281 [2024-11-04 09:55:51.240613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.281 [2024-11-04 09:55:51.274924] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.281 [2024-11-04 09:55:51.274980] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.281 [2024-11-04 09:55:51.275018] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.281 [2024-11-04 09:55:51.390816] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:19.540 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:19.540 [2024-11-04 09:55:51.518421] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:19.540 [2024-11-04 09:55:51.518510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60380 ] 00:06:19.540 [2024-11-04 09:55:51.662879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.798 [2024-11-04 09:55:51.720596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.798 [2024-11-04 09:55:51.775161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.798 [2024-11-04 09:55:51.809792] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.798 [2024-11-04 09:55:51.809840] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:19.798 [2024-11-04 09:55:51.809859] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.798 [2024-11-04 09:55:51.926644] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:20.056 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:20.056 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.056 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:20.056 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.056 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:20.056 09:55:51 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.056 00:06:20.056 real 0m1.067s 00:06:20.056 user 0m0.590s 00:06:20.056 sys 0m0.266s 00:06:20.056 09:55:51 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:20.056 ************************************ 00:06:20.056 END TEST dd_flag_directory_forced_aio 00:06:20.056 ************************************ 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:20.056 ************************************ 00:06:20.056 START TEST dd_flag_nofollow_forced_aio 00:06:20.056 ************************************ 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.056 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:20.057 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.057 [2024-11-04 09:55:52.116206] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:20.057 [2024-11-04 09:55:52.116469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60413 ] 00:06:20.314 [2024-11-04 09:55:52.257320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.314 [2024-11-04 09:55:52.320063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.314 [2024-11-04 09:55:52.374662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.314 [2024-11-04 09:55:52.409964] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:20.314 [2024-11-04 09:55:52.410019] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:20.314 [2024-11-04 09:55:52.410054] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:20.573 [2024-11-04 09:55:52.527423] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:20.573 09:55:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:20.573 [2024-11-04 09:55:52.665355] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:20.573 [2024-11-04 09:55:52.665464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60423 ] 00:06:20.832 [2024-11-04 09:55:52.811424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.832 [2024-11-04 09:55:52.874921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.832 [2024-11-04 09:55:52.929848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.832 [2024-11-04 09:55:52.965726] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:20.832 [2024-11-04 09:55:52.965785] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:20.832 [2024-11-04 09:55:52.965821] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.090 [2024-11-04 09:55:53.079767] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.090 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.090 [2024-11-04 09:55:53.225720] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:21.090 [2024-11-04 09:55:53.225844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60430 ] 00:06:21.348 [2024-11-04 09:55:53.375109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.348 [2024-11-04 09:55:53.437121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.348 [2024-11-04 09:55:53.491982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.606  [2024-11-04T09:55:53.776Z] Copying: 512/512 [B] (average 500 kBps) 00:06:21.606 00:06:21.606 ************************************ 00:06:21.606 END TEST dd_flag_nofollow_forced_aio 00:06:21.606 ************************************ 00:06:21.607 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 0o8236twy3kj2f8wk8w8i71hvv3aqcjbea00b3gok3np90o7cody42jh2qudr6g8jlwotm2ww612unizlvsqarzdy9yk6zkfqo3munz1kbixyg1v183cj6z63b9day3r6kcnom7qzx61y2uuk3lcbojwobzuyd1yfbpna9h2s3q6roc428bsn06l0rylbil5b541kved0lmjw8broqponypliwfyo0q2navtafu4nqqp70am9c2sn7isc4tnspzz2efq4x0tc1ohft57wjzr7h8tk2hs2xefqmrvr5c90ewvklf8jgby8lxudl1a5vedm9ljueq2o2dhmzwfp3k6stwcmyaz7ydyt2x1b2ri59jyvaatkuzjji55voktr4qibnyqez3x7npys3oklbkvopddhennn6uvdhvch1bgh2mcjlzbtajfd3vn4kqxd4exbo6ucrohms9uuyicih44aaxlac13bh20yt173ho0154r879ph0llf9mre7wyfeei == \0\o\8\2\3\6\t\w\y\3\k\j\2\f\8\w\k\8\w\8\i\7\1\h\v\v\3\a\q\c\j\b\e\a\0\0\b\3\g\o\k\3\n\p\9\0\o\7\c\o\d\y\4\2\j\h\2\q\u\d\r\6\g\8\j\l\w\o\t\m\2\w\w\6\1\2\u\n\i\z\l\v\s\q\a\r\z\d\y\9\y\k\6\z\k\f\q\o\3\m\u\n\z\1\k\b\i\x\y\g\1\v\1\8\3\c\j\6\z\6\3\b\9\d\a\y\3\r\6\k\c\n\o\m\7\q\z\x\6\1\y\2\u\u\k\3\l\c\b\o\j\w\o\b\z\u\y\d\1\y\f\b\p\n\a\9\h\2\s\3\q\6\r\o\c\4\2\8\b\s\n\0\6\l\0\r\y\l\b\i\l\5\b\5\4\1\k\v\e\d\0\l\m\j\w\8\b\r\o\q\p\o\n\y\p\l\i\w\f\y\o\0\q\2\n\a\v\t\a\f\u\4\n\q\q\p\7\0\a\m\9\c\2\s\n\7\i\s\c\4\t\n\s\p\z\z\2\e\f\q\4\x\0\t\c\1\o\h\f\t\5\7\w\j\z\r\7\h\8\t\k\2\h\s\2\x\e\f\q\m\r\v\r\5\c\9\0\e\w\v\k\l\f\8\j\g\b\y\8\l\x\u\d\l\1\a\5\v\e\d\m\9\l\j\u\e\q\2\o\2\d\h\m\z\w\f\p\3\k\6\s\t\w\c\m\y\a\z\7\y\d\y\t\2\x\1\b\2\r\i\5\9\j\y\v\a\a\t\k\u\z\j\j\i\5\5\v\o\k\t\r\4\q\i\b\n\y\q\e\z\3\x\7\n\p\y\s\3\o\k\l\b\k\v\o\p\d\d\h\e\n\n\n\6\u\v\d\h\v\c\h\1\b\g\h\2\m\c\j\l\z\b\t\a\j\f\d\3\v\n\4\k\q\x\d\4\e\x\b\o\6\u\c\r\o\h\m\s\9\u\u\y\i\c\i\h\4\4\a\a\x\l\a\c\1\3\b\h\2\0\y\t\1\7\3\h\o\0\1\5\4\r\8\7\9\p\h\0\l\l\f\9\m\r\e\7\w\y\f\e\e\i ]] 00:06:21.607 00:06:21.607 real 0m1.683s 00:06:21.607 user 0m0.915s 00:06:21.607 sys 0m0.436s 00:06:21.607 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.607 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:21.865 ************************************ 00:06:21.865 START TEST dd_flag_noatime_forced_aio 00:06:21.865 ************************************ 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1730714153 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1730714153 00:06:21.865 09:55:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:22.799 09:55:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.799 [2024-11-04 09:55:54.870194] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:22.799 [2024-11-04 09:55:54.870285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60471 ] 00:06:23.071 [2024-11-04 09:55:55.011877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.071 [2024-11-04 09:55:55.068459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.071 [2024-11-04 09:55:55.123465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.071  [2024-11-04T09:55:55.513Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.343 00:06:23.343 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:23.343 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1730714153 )) 00:06:23.343 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.343 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1730714153 )) 00:06:23.343 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.343 [2024-11-04 09:55:55.431574] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:23.344 [2024-11-04 09:55:55.431715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60482 ] 00:06:23.602 [2024-11-04 09:55:55.580106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.602 [2024-11-04 09:55:55.639627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.602 [2024-11-04 09:55:55.695264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.602  [2024-11-04T09:55:56.031Z] Copying: 512/512 [B] (average 500 kBps) 00:06:23.861 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1730714155 )) 00:06:23.861 00:06:23.861 real 0m2.145s 00:06:23.861 user 0m0.609s 00:06:23.861 sys 0m0.297s 00:06:23.861 ************************************ 00:06:23.861 END TEST dd_flag_noatime_forced_aio 00:06:23.861 ************************************ 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.861 ************************************ 00:06:23.861 START TEST dd_flags_misc_forced_aio 00:06:23.861 ************************************ 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:23.861 09:55:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:24.120 [2024-11-04 09:55:56.053756] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:24.120 [2024-11-04 09:55:56.054037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60513 ] 00:06:24.120 [2024-11-04 09:55:56.202593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.120 [2024-11-04 09:55:56.259387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.379 [2024-11-04 09:55:56.312916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.379  [2024-11-04T09:55:56.549Z] Copying: 512/512 [B] (average 500 kBps) 00:06:24.379 00:06:24.379 09:55:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0w4tshszry7gqf94hnqomzuvlxcf2od434dka1sfoak6cvv3igeoy2mz2f2g29422k9jggetb2upq75ir9a50r4hci1bv4fuk4tud7a3ph6s9wo1rlzv8zyiw2g0vssvk341i204hvkwqq31yup88dh4eeqloe3m24mmiwq62vr8xwggao16l0968pyad7eyny71ee4h9o8ei9axfa3bf41844ch9nitfpeheoatm7l238ctpbhlxc53he3rp2d1vszdw4fw0bia78ffgcqvy603akmkmf02ds081hpn4v4no7zccs3tui1ofrz05bqiy3jpxlcbgd87db2m4jnr2s8sl0wjxl1jspzr3m0v5wlg24346mgtwlxndwsf9oyeplhzvlz4rl9wuxqz2a7qke20zd2uc5q03i29okjd3d30y0okxcyy2ew7euwxp48idxgsfowv1xufm7fc23a9pyyfls21c6mo3uaggm70laako5iofnmnm1ssj6mw3n51 == 
\0\w\4\t\s\h\s\z\r\y\7\g\q\f\9\4\h\n\q\o\m\z\u\v\l\x\c\f\2\o\d\4\3\4\d\k\a\1\s\f\o\a\k\6\c\v\v\3\i\g\e\o\y\2\m\z\2\f\2\g\2\9\4\2\2\k\9\j\g\g\e\t\b\2\u\p\q\7\5\i\r\9\a\5\0\r\4\h\c\i\1\b\v\4\f\u\k\4\t\u\d\7\a\3\p\h\6\s\9\w\o\1\r\l\z\v\8\z\y\i\w\2\g\0\v\s\s\v\k\3\4\1\i\2\0\4\h\v\k\w\q\q\3\1\y\u\p\8\8\d\h\4\e\e\q\l\o\e\3\m\2\4\m\m\i\w\q\6\2\v\r\8\x\w\g\g\a\o\1\6\l\0\9\6\8\p\y\a\d\7\e\y\n\y\7\1\e\e\4\h\9\o\8\e\i\9\a\x\f\a\3\b\f\4\1\8\4\4\c\h\9\n\i\t\f\p\e\h\e\o\a\t\m\7\l\2\3\8\c\t\p\b\h\l\x\c\5\3\h\e\3\r\p\2\d\1\v\s\z\d\w\4\f\w\0\b\i\a\7\8\f\f\g\c\q\v\y\6\0\3\a\k\m\k\m\f\0\2\d\s\0\8\1\h\p\n\4\v\4\n\o\7\z\c\c\s\3\t\u\i\1\o\f\r\z\0\5\b\q\i\y\3\j\p\x\l\c\b\g\d\8\7\d\b\2\m\4\j\n\r\2\s\8\s\l\0\w\j\x\l\1\j\s\p\z\r\3\m\0\v\5\w\l\g\2\4\3\4\6\m\g\t\w\l\x\n\d\w\s\f\9\o\y\e\p\l\h\z\v\l\z\4\r\l\9\w\u\x\q\z\2\a\7\q\k\e\2\0\z\d\2\u\c\5\q\0\3\i\2\9\o\k\j\d\3\d\3\0\y\0\o\k\x\c\y\y\2\e\w\7\e\u\w\x\p\4\8\i\d\x\g\s\f\o\w\v\1\x\u\f\m\7\f\c\2\3\a\9\p\y\y\f\l\s\2\1\c\6\m\o\3\u\a\g\g\m\7\0\l\a\a\k\o\5\i\o\f\n\m\n\m\1\s\s\j\6\m\w\3\n\5\1 ]] 00:06:24.379 09:55:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:24.379 09:55:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:24.637 [2024-11-04 09:55:56.598815] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:24.638 [2024-11-04 09:55:56.598910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60516 ] 00:06:24.638 [2024-11-04 09:55:56.751748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.896 [2024-11-04 09:55:56.814030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.896 [2024-11-04 09:55:56.866438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.896  [2024-11-04T09:55:57.324Z] Copying: 512/512 [B] (average 500 kBps) 00:06:25.154 00:06:25.154 09:55:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0w4tshszry7gqf94hnqomzuvlxcf2od434dka1sfoak6cvv3igeoy2mz2f2g29422k9jggetb2upq75ir9a50r4hci1bv4fuk4tud7a3ph6s9wo1rlzv8zyiw2g0vssvk341i204hvkwqq31yup88dh4eeqloe3m24mmiwq62vr8xwggao16l0968pyad7eyny71ee4h9o8ei9axfa3bf41844ch9nitfpeheoatm7l238ctpbhlxc53he3rp2d1vszdw4fw0bia78ffgcqvy603akmkmf02ds081hpn4v4no7zccs3tui1ofrz05bqiy3jpxlcbgd87db2m4jnr2s8sl0wjxl1jspzr3m0v5wlg24346mgtwlxndwsf9oyeplhzvlz4rl9wuxqz2a7qke20zd2uc5q03i29okjd3d30y0okxcyy2ew7euwxp48idxgsfowv1xufm7fc23a9pyyfls21c6mo3uaggm70laako5iofnmnm1ssj6mw3n51 == 
\0\w\4\t\s\h\s\z\r\y\7\g\q\f\9\4\h\n\q\o\m\z\u\v\l\x\c\f\2\o\d\4\3\4\d\k\a\1\s\f\o\a\k\6\c\v\v\3\i\g\e\o\y\2\m\z\2\f\2\g\2\9\4\2\2\k\9\j\g\g\e\t\b\2\u\p\q\7\5\i\r\9\a\5\0\r\4\h\c\i\1\b\v\4\f\u\k\4\t\u\d\7\a\3\p\h\6\s\9\w\o\1\r\l\z\v\8\z\y\i\w\2\g\0\v\s\s\v\k\3\4\1\i\2\0\4\h\v\k\w\q\q\3\1\y\u\p\8\8\d\h\4\e\e\q\l\o\e\3\m\2\4\m\m\i\w\q\6\2\v\r\8\x\w\g\g\a\o\1\6\l\0\9\6\8\p\y\a\d\7\e\y\n\y\7\1\e\e\4\h\9\o\8\e\i\9\a\x\f\a\3\b\f\4\1\8\4\4\c\h\9\n\i\t\f\p\e\h\e\o\a\t\m\7\l\2\3\8\c\t\p\b\h\l\x\c\5\3\h\e\3\r\p\2\d\1\v\s\z\d\w\4\f\w\0\b\i\a\7\8\f\f\g\c\q\v\y\6\0\3\a\k\m\k\m\f\0\2\d\s\0\8\1\h\p\n\4\v\4\n\o\7\z\c\c\s\3\t\u\i\1\o\f\r\z\0\5\b\q\i\y\3\j\p\x\l\c\b\g\d\8\7\d\b\2\m\4\j\n\r\2\s\8\s\l\0\w\j\x\l\1\j\s\p\z\r\3\m\0\v\5\w\l\g\2\4\3\4\6\m\g\t\w\l\x\n\d\w\s\f\9\o\y\e\p\l\h\z\v\l\z\4\r\l\9\w\u\x\q\z\2\a\7\q\k\e\2\0\z\d\2\u\c\5\q\0\3\i\2\9\o\k\j\d\3\d\3\0\y\0\o\k\x\c\y\y\2\e\w\7\e\u\w\x\p\4\8\i\d\x\g\s\f\o\w\v\1\x\u\f\m\7\f\c\2\3\a\9\p\y\y\f\l\s\2\1\c\6\m\o\3\u\a\g\g\m\7\0\l\a\a\k\o\5\i\o\f\n\m\n\m\1\s\s\j\6\m\w\3\n\5\1 ]] 00:06:25.154 09:55:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.154 09:55:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:25.154 [2024-11-04 09:55:57.151190] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:25.154 [2024-11-04 09:55:57.151295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60529 ] 00:06:25.154 [2024-11-04 09:55:57.294435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.412 [2024-11-04 09:55:57.360484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.412 [2024-11-04 09:55:57.413663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.412  [2024-11-04T09:55:57.841Z] Copying: 512/512 [B] (average 125 kBps) 00:06:25.671 00:06:25.671 09:55:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0w4tshszry7gqf94hnqomzuvlxcf2od434dka1sfoak6cvv3igeoy2mz2f2g29422k9jggetb2upq75ir9a50r4hci1bv4fuk4tud7a3ph6s9wo1rlzv8zyiw2g0vssvk341i204hvkwqq31yup88dh4eeqloe3m24mmiwq62vr8xwggao16l0968pyad7eyny71ee4h9o8ei9axfa3bf41844ch9nitfpeheoatm7l238ctpbhlxc53he3rp2d1vszdw4fw0bia78ffgcqvy603akmkmf02ds081hpn4v4no7zccs3tui1ofrz05bqiy3jpxlcbgd87db2m4jnr2s8sl0wjxl1jspzr3m0v5wlg24346mgtwlxndwsf9oyeplhzvlz4rl9wuxqz2a7qke20zd2uc5q03i29okjd3d30y0okxcyy2ew7euwxp48idxgsfowv1xufm7fc23a9pyyfls21c6mo3uaggm70laako5iofnmnm1ssj6mw3n51 == 
\0\w\4\t\s\h\s\z\r\y\7\g\q\f\9\4\h\n\q\o\m\z\u\v\l\x\c\f\2\o\d\4\3\4\d\k\a\1\s\f\o\a\k\6\c\v\v\3\i\g\e\o\y\2\m\z\2\f\2\g\2\9\4\2\2\k\9\j\g\g\e\t\b\2\u\p\q\7\5\i\r\9\a\5\0\r\4\h\c\i\1\b\v\4\f\u\k\4\t\u\d\7\a\3\p\h\6\s\9\w\o\1\r\l\z\v\8\z\y\i\w\2\g\0\v\s\s\v\k\3\4\1\i\2\0\4\h\v\k\w\q\q\3\1\y\u\p\8\8\d\h\4\e\e\q\l\o\e\3\m\2\4\m\m\i\w\q\6\2\v\r\8\x\w\g\g\a\o\1\6\l\0\9\6\8\p\y\a\d\7\e\y\n\y\7\1\e\e\4\h\9\o\8\e\i\9\a\x\f\a\3\b\f\4\1\8\4\4\c\h\9\n\i\t\f\p\e\h\e\o\a\t\m\7\l\2\3\8\c\t\p\b\h\l\x\c\5\3\h\e\3\r\p\2\d\1\v\s\z\d\w\4\f\w\0\b\i\a\7\8\f\f\g\c\q\v\y\6\0\3\a\k\m\k\m\f\0\2\d\s\0\8\1\h\p\n\4\v\4\n\o\7\z\c\c\s\3\t\u\i\1\o\f\r\z\0\5\b\q\i\y\3\j\p\x\l\c\b\g\d\8\7\d\b\2\m\4\j\n\r\2\s\8\s\l\0\w\j\x\l\1\j\s\p\z\r\3\m\0\v\5\w\l\g\2\4\3\4\6\m\g\t\w\l\x\n\d\w\s\f\9\o\y\e\p\l\h\z\v\l\z\4\r\l\9\w\u\x\q\z\2\a\7\q\k\e\2\0\z\d\2\u\c\5\q\0\3\i\2\9\o\k\j\d\3\d\3\0\y\0\o\k\x\c\y\y\2\e\w\7\e\u\w\x\p\4\8\i\d\x\g\s\f\o\w\v\1\x\u\f\m\7\f\c\2\3\a\9\p\y\y\f\l\s\2\1\c\6\m\o\3\u\a\g\g\m\7\0\l\a\a\k\o\5\i\o\f\n\m\n\m\1\s\s\j\6\m\w\3\n\5\1 ]] 00:06:25.671 09:55:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:25.671 09:55:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:25.671 [2024-11-04 09:55:57.690977] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:25.671 [2024-11-04 09:55:57.691278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60537 ] 00:06:25.671 [2024-11-04 09:55:57.834090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.929 [2024-11-04 09:55:57.891999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.929 [2024-11-04 09:55:57.945576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.929  [2024-11-04T09:55:58.358Z] Copying: 512/512 [B] (average 500 kBps) 00:06:26.188 00:06:26.188 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0w4tshszry7gqf94hnqomzuvlxcf2od434dka1sfoak6cvv3igeoy2mz2f2g29422k9jggetb2upq75ir9a50r4hci1bv4fuk4tud7a3ph6s9wo1rlzv8zyiw2g0vssvk341i204hvkwqq31yup88dh4eeqloe3m24mmiwq62vr8xwggao16l0968pyad7eyny71ee4h9o8ei9axfa3bf41844ch9nitfpeheoatm7l238ctpbhlxc53he3rp2d1vszdw4fw0bia78ffgcqvy603akmkmf02ds081hpn4v4no7zccs3tui1ofrz05bqiy3jpxlcbgd87db2m4jnr2s8sl0wjxl1jspzr3m0v5wlg24346mgtwlxndwsf9oyeplhzvlz4rl9wuxqz2a7qke20zd2uc5q03i29okjd3d30y0okxcyy2ew7euwxp48idxgsfowv1xufm7fc23a9pyyfls21c6mo3uaggm70laako5iofnmnm1ssj6mw3n51 == 
\0\w\4\t\s\h\s\z\r\y\7\g\q\f\9\4\h\n\q\o\m\z\u\v\l\x\c\f\2\o\d\4\3\4\d\k\a\1\s\f\o\a\k\6\c\v\v\3\i\g\e\o\y\2\m\z\2\f\2\g\2\9\4\2\2\k\9\j\g\g\e\t\b\2\u\p\q\7\5\i\r\9\a\5\0\r\4\h\c\i\1\b\v\4\f\u\k\4\t\u\d\7\a\3\p\h\6\s\9\w\o\1\r\l\z\v\8\z\y\i\w\2\g\0\v\s\s\v\k\3\4\1\i\2\0\4\h\v\k\w\q\q\3\1\y\u\p\8\8\d\h\4\e\e\q\l\o\e\3\m\2\4\m\m\i\w\q\6\2\v\r\8\x\w\g\g\a\o\1\6\l\0\9\6\8\p\y\a\d\7\e\y\n\y\7\1\e\e\4\h\9\o\8\e\i\9\a\x\f\a\3\b\f\4\1\8\4\4\c\h\9\n\i\t\f\p\e\h\e\o\a\t\m\7\l\2\3\8\c\t\p\b\h\l\x\c\5\3\h\e\3\r\p\2\d\1\v\s\z\d\w\4\f\w\0\b\i\a\7\8\f\f\g\c\q\v\y\6\0\3\a\k\m\k\m\f\0\2\d\s\0\8\1\h\p\n\4\v\4\n\o\7\z\c\c\s\3\t\u\i\1\o\f\r\z\0\5\b\q\i\y\3\j\p\x\l\c\b\g\d\8\7\d\b\2\m\4\j\n\r\2\s\8\s\l\0\w\j\x\l\1\j\s\p\z\r\3\m\0\v\5\w\l\g\2\4\3\4\6\m\g\t\w\l\x\n\d\w\s\f\9\o\y\e\p\l\h\z\v\l\z\4\r\l\9\w\u\x\q\z\2\a\7\q\k\e\2\0\z\d\2\u\c\5\q\0\3\i\2\9\o\k\j\d\3\d\3\0\y\0\o\k\x\c\y\y\2\e\w\7\e\u\w\x\p\4\8\i\d\x\g\s\f\o\w\v\1\x\u\f\m\7\f\c\2\3\a\9\p\y\y\f\l\s\2\1\c\6\m\o\3\u\a\g\g\m\7\0\l\a\a\k\o\5\i\o\f\n\m\n\m\1\s\s\j\6\m\w\3\n\5\1 ]] 00:06:26.188 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:26.188 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:26.188 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:26.188 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:26.188 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.188 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:26.188 [2024-11-04 09:55:58.273432] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:26.188 [2024-11-04 09:55:58.273545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60544 ] 00:06:26.447 [2024-11-04 09:55:58.423551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.447 [2024-11-04 09:55:58.486876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.447 [2024-11-04 09:55:58.541442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.447  [2024-11-04T09:55:58.876Z] Copying: 512/512 [B] (average 500 kBps) 00:06:26.706 00:06:26.706 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h5i8v0yggzhtaa5che5qs27badj326l5am22bkal2au1kq1e4hbclcucmqkqi89stgh5h5gqa4vxrigmpi5ikbrqjkhqrsqw8e58u8jgi692jlcfmopb9e00q41db5fsuyjlvs4prbytvoasvbqokpw8w5vs6pfhve0m9am4xpmg6enjzy4pqfmcz1kvt4td43sjkw9p0jk0u52xpynn5ua0ln1ff71dbbiadlg752jqezczy9bgezlqgxd2n6h42blh0bvoaa9u3c51bhvk5vl1scm7fgow5ykrgppkrvkhuhhvc5apq6vrl6dw8lpflcw76my7px9h3yzfseqgm0ri7os2fqatsac4p7f3agni8chzruwclawxothmxps0du39q5nohf7or7ngqu4h784mzc4am29z7yred0pd0rd4r0oe09yyspsn01pbxcl9ayvj5y6iyk41bda15tuqrk8t8jbam7ez7svhooedhxddolmwsdkz0qajvdsyx3ye == \h\5\i\8\v\0\y\g\g\z\h\t\a\a\5\c\h\e\5\q\s\2\7\b\a\d\j\3\2\6\l\5\a\m\2\2\b\k\a\l\2\a\u\1\k\q\1\e\4\h\b\c\l\c\u\c\m\q\k\q\i\8\9\s\t\g\h\5\h\5\g\q\a\4\v\x\r\i\g\m\p\i\5\i\k\b\r\q\j\k\h\q\r\s\q\w\8\e\5\8\u\8\j\g\i\6\9\2\j\l\c\f\m\o\p\b\9\e\0\0\q\4\1\d\b\5\f\s\u\y\j\l\v\s\4\p\r\b\y\t\v\o\a\s\v\b\q\o\k\p\w\8\w\5\v\s\6\p\f\h\v\e\0\m\9\a\m\4\x\p\m\g\6\e\n\j\z\y\4\p\q\f\m\c\z\1\k\v\t\4\t\d\4\3\s\j\k\w\9\p\0\j\k\0\u\5\2\x\p\y\n\n\5\u\a\0\l\n\1\f\f\7\1\d\b\b\i\a\d\l\g\7\5\2\j\q\e\z\c\z\y\9\b\g\e\z\l\q\g\x\d\2\n\6\h\4\2\b\l\h\0\b\v\o\a\a\9\u\3\c\5\1\b\h\v\k\5\v\l\1\s\c\m\7\f\g\o\w\5\y\k\r\g\p\p\k\r\v\k\h\u\h\h\v\c\5\a\p\q\6\v\r\l\6\d\w\8\l\p\f\l\c\w\7\6\m\y\7\p\x\9\h\3\y\z\f\s\e\q\g\m\0\r\i\7\o\s\2\f\q\a\t\s\a\c\4\p\7\f\3\a\g\n\i\8\c\h\z\r\u\w\c\l\a\w\x\o\t\h\m\x\p\s\0\d\u\3\9\q\5\n\o\h\f\7\o\r\7\n\g\q\u\4\h\7\8\4\m\z\c\4\a\m\2\9\z\7\y\r\e\d\0\p\d\0\r\d\4\r\0\o\e\0\9\y\y\s\p\s\n\0\1\p\b\x\c\l\9\a\y\v\j\5\y\6\i\y\k\4\1\b\d\a\1\5\t\u\q\r\k\8\t\8\j\b\a\m\7\e\z\7\s\v\h\o\o\e\d\h\x\d\d\o\l\m\w\s\d\k\z\0\q\a\j\v\d\s\y\x\3\y\e ]] 00:06:26.706 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:26.706 09:55:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:26.706 [2024-11-04 09:55:58.826881] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:26.706 [2024-11-04 09:55:58.826985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60552 ] 00:06:26.966 [2024-11-04 09:55:58.973067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.966 [2024-11-04 09:55:59.037047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.966 [2024-11-04 09:55:59.089394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.966  [2024-11-04T09:55:59.415Z] Copying: 512/512 [B] (average 500 kBps) 00:06:27.245 00:06:27.246 09:55:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h5i8v0yggzhtaa5che5qs27badj326l5am22bkal2au1kq1e4hbclcucmqkqi89stgh5h5gqa4vxrigmpi5ikbrqjkhqrsqw8e58u8jgi692jlcfmopb9e00q41db5fsuyjlvs4prbytvoasvbqokpw8w5vs6pfhve0m9am4xpmg6enjzy4pqfmcz1kvt4td43sjkw9p0jk0u52xpynn5ua0ln1ff71dbbiadlg752jqezczy9bgezlqgxd2n6h42blh0bvoaa9u3c51bhvk5vl1scm7fgow5ykrgppkrvkhuhhvc5apq6vrl6dw8lpflcw76my7px9h3yzfseqgm0ri7os2fqatsac4p7f3agni8chzruwclawxothmxps0du39q5nohf7or7ngqu4h784mzc4am29z7yred0pd0rd4r0oe09yyspsn01pbxcl9ayvj5y6iyk41bda15tuqrk8t8jbam7ez7svhooedhxddolmwsdkz0qajvdsyx3ye == \h\5\i\8\v\0\y\g\g\z\h\t\a\a\5\c\h\e\5\q\s\2\7\b\a\d\j\3\2\6\l\5\a\m\2\2\b\k\a\l\2\a\u\1\k\q\1\e\4\h\b\c\l\c\u\c\m\q\k\q\i\8\9\s\t\g\h\5\h\5\g\q\a\4\v\x\r\i\g\m\p\i\5\i\k\b\r\q\j\k\h\q\r\s\q\w\8\e\5\8\u\8\j\g\i\6\9\2\j\l\c\f\m\o\p\b\9\e\0\0\q\4\1\d\b\5\f\s\u\y\j\l\v\s\4\p\r\b\y\t\v\o\a\s\v\b\q\o\k\p\w\8\w\5\v\s\6\p\f\h\v\e\0\m\9\a\m\4\x\p\m\g\6\e\n\j\z\y\4\p\q\f\m\c\z\1\k\v\t\4\t\d\4\3\s\j\k\w\9\p\0\j\k\0\u\5\2\x\p\y\n\n\5\u\a\0\l\n\1\f\f\7\1\d\b\b\i\a\d\l\g\7\5\2\j\q\e\z\c\z\y\9\b\g\e\z\l\q\g\x\d\2\n\6\h\4\2\b\l\h\0\b\v\o\a\a\9\u\3\c\5\1\b\h\v\k\5\v\l\1\s\c\m\7\f\g\o\w\5\y\k\r\g\p\p\k\r\v\k\h\u\h\h\v\c\5\a\p\q\6\v\r\l\6\d\w\8\l\p\f\l\c\w\7\6\m\y\7\p\x\9\h\3\y\z\f\s\e\q\g\m\0\r\i\7\o\s\2\f\q\a\t\s\a\c\4\p\7\f\3\a\g\n\i\8\c\h\z\r\u\w\c\l\a\w\x\o\t\h\m\x\p\s\0\d\u\3\9\q\5\n\o\h\f\7\o\r\7\n\g\q\u\4\h\7\8\4\m\z\c\4\a\m\2\9\z\7\y\r\e\d\0\p\d\0\r\d\4\r\0\o\e\0\9\y\y\s\p\s\n\0\1\p\b\x\c\l\9\a\y\v\j\5\y\6\i\y\k\4\1\b\d\a\1\5\t\u\q\r\k\8\t\8\j\b\a\m\7\e\z\7\s\v\h\o\o\e\d\h\x\d\d\o\l\m\w\s\d\k\z\0\q\a\j\v\d\s\y\x\3\y\e ]] 00:06:27.246 09:55:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.246 09:55:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:27.246 [2024-11-04 09:55:59.401003] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:27.246 [2024-11-04 09:55:59.401159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60559 ] 00:06:27.505 [2024-11-04 09:55:59.552020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.505 [2024-11-04 09:55:59.614032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.505 [2024-11-04 09:55:59.667318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.763  [2024-11-04T09:55:59.933Z] Copying: 512/512 [B] (average 500 kBps) 00:06:27.763 00:06:27.763 09:55:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h5i8v0yggzhtaa5che5qs27badj326l5am22bkal2au1kq1e4hbclcucmqkqi89stgh5h5gqa4vxrigmpi5ikbrqjkhqrsqw8e58u8jgi692jlcfmopb9e00q41db5fsuyjlvs4prbytvoasvbqokpw8w5vs6pfhve0m9am4xpmg6enjzy4pqfmcz1kvt4td43sjkw9p0jk0u52xpynn5ua0ln1ff71dbbiadlg752jqezczy9bgezlqgxd2n6h42blh0bvoaa9u3c51bhvk5vl1scm7fgow5ykrgppkrvkhuhhvc5apq6vrl6dw8lpflcw76my7px9h3yzfseqgm0ri7os2fqatsac4p7f3agni8chzruwclawxothmxps0du39q5nohf7or7ngqu4h784mzc4am29z7yred0pd0rd4r0oe09yyspsn01pbxcl9ayvj5y6iyk41bda15tuqrk8t8jbam7ez7svhooedhxddolmwsdkz0qajvdsyx3ye == \h\5\i\8\v\0\y\g\g\z\h\t\a\a\5\c\h\e\5\q\s\2\7\b\a\d\j\3\2\6\l\5\a\m\2\2\b\k\a\l\2\a\u\1\k\q\1\e\4\h\b\c\l\c\u\c\m\q\k\q\i\8\9\s\t\g\h\5\h\5\g\q\a\4\v\x\r\i\g\m\p\i\5\i\k\b\r\q\j\k\h\q\r\s\q\w\8\e\5\8\u\8\j\g\i\6\9\2\j\l\c\f\m\o\p\b\9\e\0\0\q\4\1\d\b\5\f\s\u\y\j\l\v\s\4\p\r\b\y\t\v\o\a\s\v\b\q\o\k\p\w\8\w\5\v\s\6\p\f\h\v\e\0\m\9\a\m\4\x\p\m\g\6\e\n\j\z\y\4\p\q\f\m\c\z\1\k\v\t\4\t\d\4\3\s\j\k\w\9\p\0\j\k\0\u\5\2\x\p\y\n\n\5\u\a\0\l\n\1\f\f\7\1\d\b\b\i\a\d\l\g\7\5\2\j\q\e\z\c\z\y\9\b\g\e\z\l\q\g\x\d\2\n\6\h\4\2\b\l\h\0\b\v\o\a\a\9\u\3\c\5\1\b\h\v\k\5\v\l\1\s\c\m\7\f\g\o\w\5\y\k\r\g\p\p\k\r\v\k\h\u\h\h\v\c\5\a\p\q\6\v\r\l\6\d\w\8\l\p\f\l\c\w\7\6\m\y\7\p\x\9\h\3\y\z\f\s\e\q\g\m\0\r\i\7\o\s\2\f\q\a\t\s\a\c\4\p\7\f\3\a\g\n\i\8\c\h\z\r\u\w\c\l\a\w\x\o\t\h\m\x\p\s\0\d\u\3\9\q\5\n\o\h\f\7\o\r\7\n\g\q\u\4\h\7\8\4\m\z\c\4\a\m\2\9\z\7\y\r\e\d\0\p\d\0\r\d\4\r\0\o\e\0\9\y\y\s\p\s\n\0\1\p\b\x\c\l\9\a\y\v\j\5\y\6\i\y\k\4\1\b\d\a\1\5\t\u\q\r\k\8\t\8\j\b\a\m\7\e\z\7\s\v\h\o\o\e\d\h\x\d\d\o\l\m\w\s\d\k\z\0\q\a\j\v\d\s\y\x\3\y\e ]] 00:06:27.763 09:55:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:27.763 09:55:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:28.021 [2024-11-04 09:55:59.955135] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:28.021 [2024-11-04 09:55:59.955219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60568 ] 00:06:28.021 [2024-11-04 09:56:00.098816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.021 [2024-11-04 09:56:00.161857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.280 [2024-11-04 09:56:00.215189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.280  [2024-11-04T09:56:00.708Z] Copying: 512/512 [B] (average 250 kBps) 00:06:28.538 00:06:28.538 ************************************ 00:06:28.538 END TEST dd_flags_misc_forced_aio 00:06:28.538 ************************************ 00:06:28.538 09:56:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ h5i8v0yggzhtaa5che5qs27badj326l5am22bkal2au1kq1e4hbclcucmqkqi89stgh5h5gqa4vxrigmpi5ikbrqjkhqrsqw8e58u8jgi692jlcfmopb9e00q41db5fsuyjlvs4prbytvoasvbqokpw8w5vs6pfhve0m9am4xpmg6enjzy4pqfmcz1kvt4td43sjkw9p0jk0u52xpynn5ua0ln1ff71dbbiadlg752jqezczy9bgezlqgxd2n6h42blh0bvoaa9u3c51bhvk5vl1scm7fgow5ykrgppkrvkhuhhvc5apq6vrl6dw8lpflcw76my7px9h3yzfseqgm0ri7os2fqatsac4p7f3agni8chzruwclawxothmxps0du39q5nohf7or7ngqu4h784mzc4am29z7yred0pd0rd4r0oe09yyspsn01pbxcl9ayvj5y6iyk41bda15tuqrk8t8jbam7ez7svhooedhxddolmwsdkz0qajvdsyx3ye == \h\5\i\8\v\0\y\g\g\z\h\t\a\a\5\c\h\e\5\q\s\2\7\b\a\d\j\3\2\6\l\5\a\m\2\2\b\k\a\l\2\a\u\1\k\q\1\e\4\h\b\c\l\c\u\c\m\q\k\q\i\8\9\s\t\g\h\5\h\5\g\q\a\4\v\x\r\i\g\m\p\i\5\i\k\b\r\q\j\k\h\q\r\s\q\w\8\e\5\8\u\8\j\g\i\6\9\2\j\l\c\f\m\o\p\b\9\e\0\0\q\4\1\d\b\5\f\s\u\y\j\l\v\s\4\p\r\b\y\t\v\o\a\s\v\b\q\o\k\p\w\8\w\5\v\s\6\p\f\h\v\e\0\m\9\a\m\4\x\p\m\g\6\e\n\j\z\y\4\p\q\f\m\c\z\1\k\v\t\4\t\d\4\3\s\j\k\w\9\p\0\j\k\0\u\5\2\x\p\y\n\n\5\u\a\0\l\n\1\f\f\7\1\d\b\b\i\a\d\l\g\7\5\2\j\q\e\z\c\z\y\9\b\g\e\z\l\q\g\x\d\2\n\6\h\4\2\b\l\h\0\b\v\o\a\a\9\u\3\c\5\1\b\h\v\k\5\v\l\1\s\c\m\7\f\g\o\w\5\y\k\r\g\p\p\k\r\v\k\h\u\h\h\v\c\5\a\p\q\6\v\r\l\6\d\w\8\l\p\f\l\c\w\7\6\m\y\7\p\x\9\h\3\y\z\f\s\e\q\g\m\0\r\i\7\o\s\2\f\q\a\t\s\a\c\4\p\7\f\3\a\g\n\i\8\c\h\z\r\u\w\c\l\a\w\x\o\t\h\m\x\p\s\0\d\u\3\9\q\5\n\o\h\f\7\o\r\7\n\g\q\u\4\h\7\8\4\m\z\c\4\a\m\2\9\z\7\y\r\e\d\0\p\d\0\r\d\4\r\0\o\e\0\9\y\y\s\p\s\n\0\1\p\b\x\c\l\9\a\y\v\j\5\y\6\i\y\k\4\1\b\d\a\1\5\t\u\q\r\k\8\t\8\j\b\a\m\7\e\z\7\s\v\h\o\o\e\d\h\x\d\d\o\l\m\w\s\d\k\z\0\q\a\j\v\d\s\y\x\3\y\e ]] 00:06:28.538 00:06:28.538 real 0m4.471s 00:06:28.538 user 0m2.406s 00:06:28.538 sys 0m1.081s 00:06:28.538 09:56:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.538 09:56:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:28.538 09:56:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:28.538 09:56:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:28.538 09:56:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:28.538 ************************************ 00:06:28.538 END TEST spdk_dd_posix 00:06:28.538 ************************************ 00:06:28.538 00:06:28.538 real 0m20.239s 00:06:28.538 user 0m9.776s 00:06:28.538 sys 0m6.382s 00:06:28.538 09:56:00 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.538 09:56:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:28.538 09:56:00 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:28.538 09:56:00 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.538 09:56:00 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.538 09:56:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:28.538 ************************************ 00:06:28.538 START TEST spdk_dd_malloc 00:06:28.538 ************************************ 00:06:28.538 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:28.538 * Looking for test storage... 00:06:28.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:28.538 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:28.538 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:28.538 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:28.796 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:28.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.797 --rc genhtml_branch_coverage=1 00:06:28.797 --rc genhtml_function_coverage=1 00:06:28.797 --rc genhtml_legend=1 00:06:28.797 --rc geninfo_all_blocks=1 00:06:28.797 --rc geninfo_unexecuted_blocks=1 00:06:28.797 00:06:28.797 ' 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:28.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.797 --rc genhtml_branch_coverage=1 00:06:28.797 --rc genhtml_function_coverage=1 00:06:28.797 --rc genhtml_legend=1 00:06:28.797 --rc geninfo_all_blocks=1 00:06:28.797 --rc geninfo_unexecuted_blocks=1 00:06:28.797 00:06:28.797 ' 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:28.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.797 --rc genhtml_branch_coverage=1 00:06:28.797 --rc genhtml_function_coverage=1 00:06:28.797 --rc genhtml_legend=1 00:06:28.797 --rc geninfo_all_blocks=1 00:06:28.797 --rc geninfo_unexecuted_blocks=1 00:06:28.797 00:06:28.797 ' 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:28.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.797 --rc genhtml_branch_coverage=1 00:06:28.797 --rc genhtml_function_coverage=1 00:06:28.797 --rc genhtml_legend=1 00:06:28.797 --rc geninfo_all_blocks=1 00:06:28.797 --rc geninfo_unexecuted_blocks=1 00:06:28.797 00:06:28.797 ' 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.797 09:56:00 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:28.797 ************************************ 00:06:28.797 START TEST dd_malloc_copy 00:06:28.797 ************************************ 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:28.797 09:56:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:28.797 { 00:06:28.797 "subsystems": [ 00:06:28.797 { 00:06:28.797 "subsystem": "bdev", 00:06:28.797 "config": [ 00:06:28.797 { 00:06:28.797 "params": { 00:06:28.797 "block_size": 512, 00:06:28.797 "num_blocks": 1048576, 00:06:28.797 "name": "malloc0" 00:06:28.797 }, 00:06:28.797 "method": "bdev_malloc_create" 00:06:28.797 }, 00:06:28.797 { 00:06:28.797 "params": { 00:06:28.797 "block_size": 512, 00:06:28.797 "num_blocks": 1048576, 00:06:28.797 "name": "malloc1" 00:06:28.797 }, 00:06:28.797 "method": "bdev_malloc_create" 00:06:28.797 }, 00:06:28.797 { 00:06:28.797 "method": "bdev_wait_for_examine" 00:06:28.797 } 00:06:28.797 ] 00:06:28.797 } 00:06:28.797 ] 00:06:28.797 } 00:06:28.797 [2024-11-04 09:56:00.804746] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:28.797 [2024-11-04 09:56:00.805479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60650 ] 00:06:28.797 [2024-11-04 09:56:00.954321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.055 [2024-11-04 09:56:01.009958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.055 [2024-11-04 09:56:01.063157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.427  [2024-11-04T09:56:03.549Z] Copying: 199/512 [MB] (199 MBps) [2024-11-04T09:56:04.116Z] Copying: 398/512 [MB] (198 MBps) [2024-11-04T09:56:04.689Z] Copying: 512/512 [MB] (average 199 MBps) 00:06:32.519 00:06:32.519 09:56:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:32.519 09:56:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:32.519 09:56:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:32.519 09:56:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:32.519 [2024-11-04 09:56:04.549431] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:32.519 [2024-11-04 09:56:04.549523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60699 ] 00:06:32.519 { 00:06:32.519 "subsystems": [ 00:06:32.519 { 00:06:32.519 "subsystem": "bdev", 00:06:32.519 "config": [ 00:06:32.519 { 00:06:32.519 "params": { 00:06:32.519 "block_size": 512, 00:06:32.519 "num_blocks": 1048576, 00:06:32.519 "name": "malloc0" 00:06:32.519 }, 00:06:32.519 "method": "bdev_malloc_create" 00:06:32.519 }, 00:06:32.519 { 00:06:32.519 "params": { 00:06:32.519 "block_size": 512, 00:06:32.519 "num_blocks": 1048576, 00:06:32.519 "name": "malloc1" 00:06:32.519 }, 00:06:32.519 "method": "bdev_malloc_create" 00:06:32.519 }, 00:06:32.519 { 00:06:32.519 "method": "bdev_wait_for_examine" 00:06:32.519 } 00:06:32.519 ] 00:06:32.519 } 00:06:32.519 ] 00:06:32.519 } 00:06:32.777 [2024-11-04 09:56:04.689771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.777 [2024-11-04 09:56:04.751246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.777 [2024-11-04 09:56:04.804327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.151  [2024-11-04T09:56:07.254Z] Copying: 201/512 [MB] (201 MBps) [2024-11-04T09:56:07.819Z] Copying: 402/512 [MB] (201 MBps) [2024-11-04T09:56:08.385Z] Copying: 512/512 [MB] (average 201 MBps) 00:06:36.215 00:06:36.215 00:06:36.215 real 0m7.496s 00:06:36.215 user 0m6.534s 00:06:36.215 sys 0m0.791s 00:06:36.215 09:56:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.215 ************************************ 00:06:36.215 END TEST dd_malloc_copy 00:06:36.215 09:56:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.215 ************************************ 00:06:36.215 ************************************ 00:06:36.215 END TEST spdk_dd_malloc 00:06:36.215 ************************************ 00:06:36.215 00:06:36.215 real 0m7.732s 00:06:36.215 user 0m6.663s 00:06:36.215 sys 0m0.902s 00:06:36.215 09:56:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.215 09:56:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:36.215 09:56:08 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:36.215 09:56:08 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:36.215 09:56:08 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.215 09:56:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:36.215 ************************************ 00:06:36.215 START TEST spdk_dd_bdev_to_bdev 00:06:36.215 ************************************ 00:06:36.215 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:36.474 * Looking for test storage... 
00:06:36.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:36.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.474 --rc genhtml_branch_coverage=1 00:06:36.474 --rc genhtml_function_coverage=1 00:06:36.474 --rc genhtml_legend=1 00:06:36.474 --rc geninfo_all_blocks=1 00:06:36.474 --rc geninfo_unexecuted_blocks=1 00:06:36.474 00:06:36.474 ' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:36.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.474 --rc genhtml_branch_coverage=1 00:06:36.474 --rc genhtml_function_coverage=1 00:06:36.474 --rc genhtml_legend=1 00:06:36.474 --rc geninfo_all_blocks=1 00:06:36.474 --rc geninfo_unexecuted_blocks=1 00:06:36.474 00:06:36.474 ' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:36.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.474 --rc genhtml_branch_coverage=1 00:06:36.474 --rc genhtml_function_coverage=1 00:06:36.474 --rc genhtml_legend=1 00:06:36.474 --rc geninfo_all_blocks=1 00:06:36.474 --rc geninfo_unexecuted_blocks=1 00:06:36.474 00:06:36.474 ' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:36.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.474 --rc genhtml_branch_coverage=1 00:06:36.474 --rc genhtml_function_coverage=1 00:06:36.474 --rc genhtml_legend=1 00:06:36.474 --rc geninfo_all_blocks=1 00:06:36.474 --rc geninfo_unexecuted_blocks=1 00:06:36.474 00:06:36.474 ' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.474 09:56:08 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:36.474 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:36.475 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:36.475 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:36.475 ************************************ 00:06:36.475 START TEST dd_inflate_file 00:06:36.475 ************************************ 00:06:36.475 09:56:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:36.475 [2024-11-04 09:56:08.602495] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:36.475 [2024-11-04 09:56:08.602763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60812 ] 00:06:36.733 [2024-11-04 09:56:08.753176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.733 [2024-11-04 09:56:08.817514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.733 [2024-11-04 09:56:08.875731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.991  [2024-11-04T09:56:09.161Z] Copying: 64/64 [MB] (average 1729 MBps) 00:06:36.991 00:06:36.991 00:06:36.991 real 0m0.589s 00:06:36.991 user 0m0.332s 00:06:36.991 sys 0m0.307s 00:06:36.991 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:36.991 ************************************ 00:06:36.991 END TEST dd_inflate_file 00:06:36.991 ************************************ 00:06:36.991 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:37.249 ************************************ 00:06:37.249 START TEST dd_copy_to_out_bdev 00:06:37.249 ************************************ 00:06:37.249 09:56:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:37.249 { 00:06:37.249 "subsystems": [ 00:06:37.249 { 00:06:37.249 "subsystem": "bdev", 00:06:37.249 "config": [ 00:06:37.249 { 00:06:37.249 "params": { 00:06:37.249 "trtype": "pcie", 00:06:37.249 "traddr": "0000:00:10.0", 00:06:37.249 "name": "Nvme0" 00:06:37.249 }, 00:06:37.249 "method": "bdev_nvme_attach_controller" 00:06:37.249 }, 00:06:37.249 { 00:06:37.249 "params": { 00:06:37.249 "trtype": "pcie", 00:06:37.249 "traddr": "0000:00:11.0", 00:06:37.249 "name": "Nvme1" 00:06:37.249 }, 00:06:37.249 "method": "bdev_nvme_attach_controller" 00:06:37.249 }, 00:06:37.249 { 00:06:37.249 "method": "bdev_wait_for_examine" 00:06:37.249 } 00:06:37.249 ] 00:06:37.249 } 00:06:37.249 ] 00:06:37.249 } 00:06:37.249 [2024-11-04 09:56:09.240853] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:37.249 [2024-11-04 09:56:09.240977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60851 ] 00:06:37.249 [2024-11-04 09:56:09.386119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.507 [2024-11-04 09:56:09.451439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.507 [2024-11-04 09:56:09.507903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.881  [2024-11-04T09:56:11.051Z] Copying: 59/64 [MB] (59 MBps) [2024-11-04T09:56:11.051Z] Copying: 64/64 [MB] (average 59 MBps) 00:06:38.881 00:06:38.881 ************************************ 00:06:38.881 END TEST dd_copy_to_out_bdev 00:06:38.881 ************************************ 00:06:38.881 00:06:38.881 real 0m1.811s 00:06:38.881 user 0m1.575s 00:06:38.881 sys 0m1.432s 00:06:38.881 09:56:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.881 09:56:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:38.881 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:38.881 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:38.881 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:38.881 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.881 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:39.139 ************************************ 00:06:39.139 START TEST dd_offset_magic 00:06:39.139 ************************************ 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:39.139 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:39.139 [2024-11-04 09:56:11.105060] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:39.139 [2024-11-04 09:56:11.105143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60896 ] 00:06:39.139 { 00:06:39.139 "subsystems": [ 00:06:39.139 { 00:06:39.139 "subsystem": "bdev", 00:06:39.139 "config": [ 00:06:39.139 { 00:06:39.139 "params": { 00:06:39.139 "trtype": "pcie", 00:06:39.139 "traddr": "0000:00:10.0", 00:06:39.139 "name": "Nvme0" 00:06:39.139 }, 00:06:39.139 "method": "bdev_nvme_attach_controller" 00:06:39.139 }, 00:06:39.139 { 00:06:39.139 "params": { 00:06:39.139 "trtype": "pcie", 00:06:39.139 "traddr": "0000:00:11.0", 00:06:39.139 "name": "Nvme1" 00:06:39.139 }, 00:06:39.139 "method": "bdev_nvme_attach_controller" 00:06:39.139 }, 00:06:39.139 { 00:06:39.139 "method": "bdev_wait_for_examine" 00:06:39.139 } 00:06:39.139 ] 00:06:39.139 } 00:06:39.139 ] 00:06:39.139 } 00:06:39.139 [2024-11-04 09:56:11.251582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.397 [2024-11-04 09:56:11.322158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.397 [2024-11-04 09:56:11.382813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.661  [2024-11-04T09:56:12.105Z] Copying: 65/65 [MB] (average 915 MBps) 00:06:39.935 00:06:39.935 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:39.935 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:39.935 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:39.935 09:56:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:39.935 { 00:06:39.935 "subsystems": [ 00:06:39.935 { 00:06:39.935 "subsystem": "bdev", 00:06:39.935 "config": [ 00:06:39.935 { 00:06:39.935 "params": { 00:06:39.935 "trtype": "pcie", 00:06:39.935 "traddr": "0000:00:10.0", 00:06:39.935 "name": "Nvme0" 00:06:39.935 }, 00:06:39.935 "method": "bdev_nvme_attach_controller" 00:06:39.935 }, 00:06:39.935 { 00:06:39.935 "params": { 00:06:39.935 "trtype": "pcie", 00:06:39.935 "traddr": "0000:00:11.0", 00:06:39.935 "name": "Nvme1" 00:06:39.935 }, 00:06:39.935 "method": "bdev_nvme_attach_controller" 00:06:39.935 }, 00:06:39.935 { 00:06:39.935 "method": "bdev_wait_for_examine" 00:06:39.935 } 00:06:39.935 ] 00:06:39.935 } 00:06:39.935 ] 00:06:39.935 } 00:06:39.935 [2024-11-04 09:56:11.957258] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:39.935 [2024-11-04 09:56:11.957400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60910 ] 00:06:40.194 [2024-11-04 09:56:12.107410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.194 [2024-11-04 09:56:12.166121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.194 [2024-11-04 09:56:12.222879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.452  [2024-11-04T09:56:12.622Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:40.452 00:06:40.452 09:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:40.452 09:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:40.452 09:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:40.452 09:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:40.452 09:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:40.452 09:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:40.452 09:56:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:40.711 { 00:06:40.711 "subsystems": [ 00:06:40.711 { 00:06:40.711 "subsystem": "bdev", 00:06:40.711 "config": [ 00:06:40.711 { 00:06:40.711 "params": { 00:06:40.711 "trtype": "pcie", 00:06:40.711 "traddr": "0000:00:10.0", 00:06:40.711 "name": "Nvme0" 00:06:40.711 }, 00:06:40.711 "method": "bdev_nvme_attach_controller" 00:06:40.711 }, 00:06:40.711 { 00:06:40.711 "params": { 00:06:40.711 "trtype": "pcie", 00:06:40.711 "traddr": "0000:00:11.0", 00:06:40.711 "name": "Nvme1" 00:06:40.711 }, 00:06:40.711 "method": "bdev_nvme_attach_controller" 00:06:40.711 }, 00:06:40.711 { 00:06:40.711 "method": "bdev_wait_for_examine" 00:06:40.711 } 00:06:40.711 ] 00:06:40.711 } 00:06:40.711 ] 00:06:40.711 } 00:06:40.711 [2024-11-04 09:56:12.663841] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:40.711 [2024-11-04 09:56:12.664192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60927 ] 00:06:40.711 [2024-11-04 09:56:12.816677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.711 [2024-11-04 09:56:12.879678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.969 [2024-11-04 09:56:12.935789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.228  [2024-11-04T09:56:13.656Z] Copying: 65/65 [MB] (average 1031 MBps) 00:06:41.486 00:06:41.486 09:56:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:41.486 09:56:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:41.486 09:56:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:41.486 09:56:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:41.486 [2024-11-04 09:56:13.476507] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:41.486 [2024-11-04 09:56:13.476638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60947 ] 00:06:41.486 { 00:06:41.486 "subsystems": [ 00:06:41.486 { 00:06:41.486 "subsystem": "bdev", 00:06:41.486 "config": [ 00:06:41.486 { 00:06:41.486 "params": { 00:06:41.486 "trtype": "pcie", 00:06:41.486 "traddr": "0000:00:10.0", 00:06:41.486 "name": "Nvme0" 00:06:41.486 }, 00:06:41.486 "method": "bdev_nvme_attach_controller" 00:06:41.486 }, 00:06:41.486 { 00:06:41.486 "params": { 00:06:41.486 "trtype": "pcie", 00:06:41.486 "traddr": "0000:00:11.0", 00:06:41.486 "name": "Nvme1" 00:06:41.486 }, 00:06:41.486 "method": "bdev_nvme_attach_controller" 00:06:41.486 }, 00:06:41.486 { 00:06:41.486 "method": "bdev_wait_for_examine" 00:06:41.486 } 00:06:41.486 ] 00:06:41.486 } 00:06:41.486 ] 00:06:41.486 } 00:06:41.486 [2024-11-04 09:56:13.623889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.744 [2024-11-04 09:56:13.688072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.744 [2024-11-04 09:56:13.744942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.002  [2024-11-04T09:56:14.172Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:42.002 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:42.002 00:06:42.002 real 0m3.062s 00:06:42.002 user 0m2.220s 00:06:42.002 sys 0m0.924s 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.002 ************************************ 00:06:42.002 END TEST dd_offset_magic 00:06:42.002 
************************************ 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:42.002 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:42.259 [2024-11-04 09:56:14.220033] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:42.259 [2024-11-04 09:56:14.220357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60984 ] 00:06:42.259 { 00:06:42.259 "subsystems": [ 00:06:42.259 { 00:06:42.259 "subsystem": "bdev", 00:06:42.259 "config": [ 00:06:42.259 { 00:06:42.260 "params": { 00:06:42.260 "trtype": "pcie", 00:06:42.260 "traddr": "0000:00:10.0", 00:06:42.260 "name": "Nvme0" 00:06:42.260 }, 00:06:42.260 "method": "bdev_nvme_attach_controller" 00:06:42.260 }, 00:06:42.260 { 00:06:42.260 "params": { 00:06:42.260 "trtype": "pcie", 00:06:42.260 "traddr": "0000:00:11.0", 00:06:42.260 "name": "Nvme1" 00:06:42.260 }, 00:06:42.260 "method": "bdev_nvme_attach_controller" 00:06:42.260 }, 00:06:42.260 { 00:06:42.260 "method": "bdev_wait_for_examine" 00:06:42.260 } 00:06:42.260 ] 00:06:42.260 } 00:06:42.260 ] 00:06:42.260 } 00:06:42.260 [2024-11-04 09:56:14.368156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.517 [2024-11-04 09:56:14.432400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.517 [2024-11-04 09:56:14.489781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.517  [2024-11-04T09:56:14.945Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:42.775 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero 
--bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:42.775 09:56:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:42.775 [2024-11-04 09:56:14.929192] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:42.775 [2024-11-04 09:56:14.929296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60994 ] 00:06:42.775 { 00:06:42.775 "subsystems": [ 00:06:42.775 { 00:06:42.775 "subsystem": "bdev", 00:06:42.775 "config": [ 00:06:42.775 { 00:06:42.775 "params": { 00:06:42.775 "trtype": "pcie", 00:06:42.775 "traddr": "0000:00:10.0", 00:06:42.775 "name": "Nvme0" 00:06:42.775 }, 00:06:42.775 "method": "bdev_nvme_attach_controller" 00:06:42.775 }, 00:06:42.775 { 00:06:42.775 "params": { 00:06:42.775 "trtype": "pcie", 00:06:42.775 "traddr": "0000:00:11.0", 00:06:42.775 "name": "Nvme1" 00:06:42.775 }, 00:06:42.775 "method": "bdev_nvme_attach_controller" 00:06:42.775 }, 00:06:42.775 { 00:06:42.775 "method": "bdev_wait_for_examine" 00:06:42.775 } 00:06:42.775 ] 00:06:42.775 } 00:06:42.775 ] 00:06:42.775 } 00:06:43.033 [2024-11-04 09:56:15.074553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.033 [2024-11-04 09:56:15.130602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.033 [2024-11-04 09:56:15.184950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.290  [2024-11-04T09:56:15.718Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:06:43.548 00:06:43.548 09:56:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:43.548 ************************************ 00:06:43.548 END TEST spdk_dd_bdev_to_bdev 00:06:43.548 ************************************ 00:06:43.548 00:06:43.548 real 0m7.266s 00:06:43.548 user 0m5.305s 00:06:43.548 sys 0m3.405s 00:06:43.548 09:56:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.548 09:56:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:43.548 09:56:15 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:43.548 09:56:15 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:43.548 09:56:15 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.548 09:56:15 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.548 09:56:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:43.548 ************************************ 00:06:43.548 START TEST spdk_dd_uring 00:06:43.548 ************************************ 00:06:43.548 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:43.807 * Looking for test storage... 
00:06:43.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.807 --rc genhtml_branch_coverage=1 00:06:43.807 --rc genhtml_function_coverage=1 00:06:43.807 --rc genhtml_legend=1 00:06:43.807 --rc geninfo_all_blocks=1 00:06:43.807 --rc geninfo_unexecuted_blocks=1 00:06:43.807 00:06:43.807 ' 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.807 --rc genhtml_branch_coverage=1 00:06:43.807 --rc genhtml_function_coverage=1 00:06:43.807 --rc genhtml_legend=1 00:06:43.807 --rc geninfo_all_blocks=1 00:06:43.807 --rc geninfo_unexecuted_blocks=1 00:06:43.807 00:06:43.807 ' 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.807 --rc genhtml_branch_coverage=1 00:06:43.807 --rc genhtml_function_coverage=1 00:06:43.807 --rc genhtml_legend=1 00:06:43.807 --rc geninfo_all_blocks=1 00:06:43.807 --rc geninfo_unexecuted_blocks=1 00:06:43.807 00:06:43.807 ' 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.807 --rc genhtml_branch_coverage=1 00:06:43.807 --rc genhtml_function_coverage=1 00:06:43.807 --rc genhtml_legend=1 00:06:43.807 --rc geninfo_all_blocks=1 00:06:43.807 --rc geninfo_unexecuted_blocks=1 00:06:43.807 00:06:43.807 ' 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.807 09:56:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:43.807 ************************************ 00:06:43.807 START TEST dd_uring_copy 00:06:43.807 ************************************ 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:43.808 
09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=4trhu94tlq1rssq6wancvczhftdnifiez7l2ayxfh5b1zs40ybzgdz77xjryx1y89mthgyt3zezd2d5lkpgqvfaadguvubgyy038p9squem4npsxzq42ifm0opc9znfkn1qyt6bx3ro4qxm81ow2x3ucugw57qpsz39e0b8ioo4b8qbeiv5racb5c2zs3lftod3y1n65vmle3v8cbv5q8ihctau1xi6tbo9cihbbjw5cfh8jjqsiirfpoycit2udzv96ce7bpy6iw0yhfx9ez64pgisuq3j6pi8i2fgo2jrghhvtrz8lmvyv8vmc3xxrbpuscse4eixk93arc3bdhneeji4asme9ak0p9vitoiw9mo5gf0oxrzuogmujq33wjnq0un3zqhanaah1z9kld9t9yn0g9fxq65bvqqhetskmazu1plh7o2x69aewmq866mmnkc7u78vfjh8y7uutkh1u0r2zke8z07w0lyn76t1do2ibk7hrunbe868xf4vu6ozyv6dwb73rvh15hn2t2fdq2vt2vbtt3xfn2vow951itcxaadfg21yrulls9uunns5p2atbaip0da8grej8h0dzeuce8255afhttvzp0ds7tyzqwiifcrepinsxeq7jh1licmo2603eox0x1wnasy4g4t6wwjhkt7dxhc5mqao6mnt4wnjqyk7k6rhumlubdhxrrvmhvpxaa4g0bknsjbdtui289080k7chyx8xbghj3uc4cmy085u4wo4feuzc8m6sh5h0rd12r0u49dgqcnk8ji4n9596czd0vrh1bpxmhjq03m6gses52lort8n013kf5cmsmsvlaus7sdj73pi8ush22bwibbx29mnflpj4uhflae4n8vdc1r0olrrcpxfr38w1phlaefkjwsac8t3iz0viko7yyc859f3m3orq9momhs7hri28dik1b9s570ei0f1c8wh799icwdiy5asn23996nskeb13frhrwbpe012411r98rk4vc6724za 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
4trhu94tlq1rssq6wancvczhftdnifiez7l2ayxfh5b1zs40ybzgdz77xjryx1y89mthgyt3zezd2d5lkpgqvfaadguvubgyy038p9squem4npsxzq42ifm0opc9znfkn1qyt6bx3ro4qxm81ow2x3ucugw57qpsz39e0b8ioo4b8qbeiv5racb5c2zs3lftod3y1n65vmle3v8cbv5q8ihctau1xi6tbo9cihbbjw5cfh8jjqsiirfpoycit2udzv96ce7bpy6iw0yhfx9ez64pgisuq3j6pi8i2fgo2jrghhvtrz8lmvyv8vmc3xxrbpuscse4eixk93arc3bdhneeji4asme9ak0p9vitoiw9mo5gf0oxrzuogmujq33wjnq0un3zqhanaah1z9kld9t9yn0g9fxq65bvqqhetskmazu1plh7o2x69aewmq866mmnkc7u78vfjh8y7uutkh1u0r2zke8z07w0lyn76t1do2ibk7hrunbe868xf4vu6ozyv6dwb73rvh15hn2t2fdq2vt2vbtt3xfn2vow951itcxaadfg21yrulls9uunns5p2atbaip0da8grej8h0dzeuce8255afhttvzp0ds7tyzqwiifcrepinsxeq7jh1licmo2603eox0x1wnasy4g4t6wwjhkt7dxhc5mqao6mnt4wnjqyk7k6rhumlubdhxrrvmhvpxaa4g0bknsjbdtui289080k7chyx8xbghj3uc4cmy085u4wo4feuzc8m6sh5h0rd12r0u49dgqcnk8ji4n9596czd0vrh1bpxmhjq03m6gses52lort8n013kf5cmsmsvlaus7sdj73pi8ush22bwibbx29mnflpj4uhflae4n8vdc1r0olrrcpxfr38w1phlaefkjwsac8t3iz0viko7yyc859f3m3orq9momhs7hri28dik1b9s570ei0f1c8wh799icwdiy5asn23996nskeb13frhrwbpe012411r98rk4vc6724za 00:06:43.808 09:56:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:43.808 [2024-11-04 09:56:15.928412] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:43.808 [2024-11-04 09:56:15.928713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61072 ] 00:06:44.066 [2024-11-04 09:56:16.070617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.066 [2024-11-04 09:56:16.133961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.066 [2024-11-04 09:56:16.190639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.055  [2024-11-04T09:56:17.483Z] Copying: 511/511 [MB] (average 1053 MBps) 00:06:45.313 00:06:45.313 09:56:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:45.313 09:56:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:45.313 09:56:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:45.313 09:56:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:45.313 [2024-11-04 09:56:17.328209] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:45.313 [2024-11-04 09:56:17.328624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61099 ] 00:06:45.313 { 00:06:45.313 "subsystems": [ 00:06:45.313 { 00:06:45.313 "subsystem": "bdev", 00:06:45.313 "config": [ 00:06:45.313 { 00:06:45.313 "params": { 00:06:45.313 "block_size": 512, 00:06:45.313 "num_blocks": 1048576, 00:06:45.313 "name": "malloc0" 00:06:45.313 }, 00:06:45.313 "method": "bdev_malloc_create" 00:06:45.313 }, 00:06:45.313 { 00:06:45.313 "params": { 00:06:45.313 "filename": "/dev/zram1", 00:06:45.313 "name": "uring0" 00:06:45.313 }, 00:06:45.313 "method": "bdev_uring_create" 00:06:45.313 }, 00:06:45.313 { 00:06:45.313 "method": "bdev_wait_for_examine" 00:06:45.313 } 00:06:45.313 ] 00:06:45.313 } 00:06:45.313 ] 00:06:45.313 } 00:06:45.313 [2024-11-04 09:56:17.471490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.570 [2024-11-04 09:56:17.533001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.570 [2024-11-04 09:56:17.591178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.941  [2024-11-04T09:56:20.045Z] Copying: 235/512 [MB] (235 MBps) [2024-11-04T09:56:20.045Z] Copying: 473/512 [MB] (238 MBps) [2024-11-04T09:56:20.612Z] Copying: 512/512 [MB] (average 236 MBps) 00:06:48.442 00:06:48.442 09:56:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:48.442 09:56:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:48.442 09:56:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:48.442 09:56:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:48.442 [2024-11-04 09:56:20.389276] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:48.442 [2024-11-04 09:56:20.389627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61143 ] 00:06:48.442 { 00:06:48.442 "subsystems": [ 00:06:48.442 { 00:06:48.442 "subsystem": "bdev", 00:06:48.442 "config": [ 00:06:48.442 { 00:06:48.442 "params": { 00:06:48.442 "block_size": 512, 00:06:48.442 "num_blocks": 1048576, 00:06:48.442 "name": "malloc0" 00:06:48.442 }, 00:06:48.442 "method": "bdev_malloc_create" 00:06:48.442 }, 00:06:48.442 { 00:06:48.442 "params": { 00:06:48.442 "filename": "/dev/zram1", 00:06:48.442 "name": "uring0" 00:06:48.442 }, 00:06:48.442 "method": "bdev_uring_create" 00:06:48.442 }, 00:06:48.442 { 00:06:48.442 "method": "bdev_wait_for_examine" 00:06:48.442 } 00:06:48.442 ] 00:06:48.442 } 00:06:48.442 ] 00:06:48.442 } 00:06:48.442 [2024-11-04 09:56:20.537715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.442 [2024-11-04 09:56:20.608722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.700 [2024-11-04 09:56:20.669394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.073  [2024-11-04T09:56:23.177Z] Copying: 178/512 [MB] (178 MBps) [2024-11-04T09:56:24.111Z] Copying: 355/512 [MB] (177 MBps) [2024-11-04T09:56:24.369Z] Copying: 512/512 [MB] (average 173 MBps) 00:06:52.199 00:06:52.199 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:52.199 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 4trhu94tlq1rssq6wancvczhftdnifiez7l2ayxfh5b1zs40ybzgdz77xjryx1y89mthgyt3zezd2d5lkpgqvfaadguvubgyy038p9squem4npsxzq42ifm0opc9znfkn1qyt6bx3ro4qxm81ow2x3ucugw57qpsz39e0b8ioo4b8qbeiv5racb5c2zs3lftod3y1n65vmle3v8cbv5q8ihctau1xi6tbo9cihbbjw5cfh8jjqsiirfpoycit2udzv96ce7bpy6iw0yhfx9ez64pgisuq3j6pi8i2fgo2jrghhvtrz8lmvyv8vmc3xxrbpuscse4eixk93arc3bdhneeji4asme9ak0p9vitoiw9mo5gf0oxrzuogmujq33wjnq0un3zqhanaah1z9kld9t9yn0g9fxq65bvqqhetskmazu1plh7o2x69aewmq866mmnkc7u78vfjh8y7uutkh1u0r2zke8z07w0lyn76t1do2ibk7hrunbe868xf4vu6ozyv6dwb73rvh15hn2t2fdq2vt2vbtt3xfn2vow951itcxaadfg21yrulls9uunns5p2atbaip0da8grej8h0dzeuce8255afhttvzp0ds7tyzqwiifcrepinsxeq7jh1licmo2603eox0x1wnasy4g4t6wwjhkt7dxhc5mqao6mnt4wnjqyk7k6rhumlubdhxrrvmhvpxaa4g0bknsjbdtui289080k7chyx8xbghj3uc4cmy085u4wo4feuzc8m6sh5h0rd12r0u49dgqcnk8ji4n9596czd0vrh1bpxmhjq03m6gses52lort8n013kf5cmsmsvlaus7sdj73pi8ush22bwibbx29mnflpj4uhflae4n8vdc1r0olrrcpxfr38w1phlaefkjwsac8t3iz0viko7yyc859f3m3orq9momhs7hri28dik1b9s570ei0f1c8wh799icwdiy5asn23996nskeb13frhrwbpe012411r98rk4vc6724za == 
\4\t\r\h\u\9\4\t\l\q\1\r\s\s\q\6\w\a\n\c\v\c\z\h\f\t\d\n\i\f\i\e\z\7\l\2\a\y\x\f\h\5\b\1\z\s\4\0\y\b\z\g\d\z\7\7\x\j\r\y\x\1\y\8\9\m\t\h\g\y\t\3\z\e\z\d\2\d\5\l\k\p\g\q\v\f\a\a\d\g\u\v\u\b\g\y\y\0\3\8\p\9\s\q\u\e\m\4\n\p\s\x\z\q\4\2\i\f\m\0\o\p\c\9\z\n\f\k\n\1\q\y\t\6\b\x\3\r\o\4\q\x\m\8\1\o\w\2\x\3\u\c\u\g\w\5\7\q\p\s\z\3\9\e\0\b\8\i\o\o\4\b\8\q\b\e\i\v\5\r\a\c\b\5\c\2\z\s\3\l\f\t\o\d\3\y\1\n\6\5\v\m\l\e\3\v\8\c\b\v\5\q\8\i\h\c\t\a\u\1\x\i\6\t\b\o\9\c\i\h\b\b\j\w\5\c\f\h\8\j\j\q\s\i\i\r\f\p\o\y\c\i\t\2\u\d\z\v\9\6\c\e\7\b\p\y\6\i\w\0\y\h\f\x\9\e\z\6\4\p\g\i\s\u\q\3\j\6\p\i\8\i\2\f\g\o\2\j\r\g\h\h\v\t\r\z\8\l\m\v\y\v\8\v\m\c\3\x\x\r\b\p\u\s\c\s\e\4\e\i\x\k\9\3\a\r\c\3\b\d\h\n\e\e\j\i\4\a\s\m\e\9\a\k\0\p\9\v\i\t\o\i\w\9\m\o\5\g\f\0\o\x\r\z\u\o\g\m\u\j\q\3\3\w\j\n\q\0\u\n\3\z\q\h\a\n\a\a\h\1\z\9\k\l\d\9\t\9\y\n\0\g\9\f\x\q\6\5\b\v\q\q\h\e\t\s\k\m\a\z\u\1\p\l\h\7\o\2\x\6\9\a\e\w\m\q\8\6\6\m\m\n\k\c\7\u\7\8\v\f\j\h\8\y\7\u\u\t\k\h\1\u\0\r\2\z\k\e\8\z\0\7\w\0\l\y\n\7\6\t\1\d\o\2\i\b\k\7\h\r\u\n\b\e\8\6\8\x\f\4\v\u\6\o\z\y\v\6\d\w\b\7\3\r\v\h\1\5\h\n\2\t\2\f\d\q\2\v\t\2\v\b\t\t\3\x\f\n\2\v\o\w\9\5\1\i\t\c\x\a\a\d\f\g\2\1\y\r\u\l\l\s\9\u\u\n\n\s\5\p\2\a\t\b\a\i\p\0\d\a\8\g\r\e\j\8\h\0\d\z\e\u\c\e\8\2\5\5\a\f\h\t\t\v\z\p\0\d\s\7\t\y\z\q\w\i\i\f\c\r\e\p\i\n\s\x\e\q\7\j\h\1\l\i\c\m\o\2\6\0\3\e\o\x\0\x\1\w\n\a\s\y\4\g\4\t\6\w\w\j\h\k\t\7\d\x\h\c\5\m\q\a\o\6\m\n\t\4\w\n\j\q\y\k\7\k\6\r\h\u\m\l\u\b\d\h\x\r\r\v\m\h\v\p\x\a\a\4\g\0\b\k\n\s\j\b\d\t\u\i\2\8\9\0\8\0\k\7\c\h\y\x\8\x\b\g\h\j\3\u\c\4\c\m\y\0\8\5\u\4\w\o\4\f\e\u\z\c\8\m\6\s\h\5\h\0\r\d\1\2\r\0\u\4\9\d\g\q\c\n\k\8\j\i\4\n\9\5\9\6\c\z\d\0\v\r\h\1\b\p\x\m\h\j\q\0\3\m\6\g\s\e\s\5\2\l\o\r\t\8\n\0\1\3\k\f\5\c\m\s\m\s\v\l\a\u\s\7\s\d\j\7\3\p\i\8\u\s\h\2\2\b\w\i\b\b\x\2\9\m\n\f\l\p\j\4\u\h\f\l\a\e\4\n\8\v\d\c\1\r\0\o\l\r\r\c\p\x\f\r\3\8\w\1\p\h\l\a\e\f\k\j\w\s\a\c\8\t\3\i\z\0\v\i\k\o\7\y\y\c\8\5\9\f\3\m\3\o\r\q\9\m\o\m\h\s\7\h\r\i\2\8\d\i\k\1\b\9\s\5\7\0\e\i\0\f\1\c\8\w\h\7\9\9\i\c\w\d\i\y\5\a\s\n\2\3\9\9\6\n\s\k\e\b\1\3\f\r\h\r\w\b\p\e\0\1\2\4\1\1\r\9\8\r\k\4\v\c\6\7\2\4\z\a ]] 00:06:52.199 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:52.199 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 4trhu94tlq1rssq6wancvczhftdnifiez7l2ayxfh5b1zs40ybzgdz77xjryx1y89mthgyt3zezd2d5lkpgqvfaadguvubgyy038p9squem4npsxzq42ifm0opc9znfkn1qyt6bx3ro4qxm81ow2x3ucugw57qpsz39e0b8ioo4b8qbeiv5racb5c2zs3lftod3y1n65vmle3v8cbv5q8ihctau1xi6tbo9cihbbjw5cfh8jjqsiirfpoycit2udzv96ce7bpy6iw0yhfx9ez64pgisuq3j6pi8i2fgo2jrghhvtrz8lmvyv8vmc3xxrbpuscse4eixk93arc3bdhneeji4asme9ak0p9vitoiw9mo5gf0oxrzuogmujq33wjnq0un3zqhanaah1z9kld9t9yn0g9fxq65bvqqhetskmazu1plh7o2x69aewmq866mmnkc7u78vfjh8y7uutkh1u0r2zke8z07w0lyn76t1do2ibk7hrunbe868xf4vu6ozyv6dwb73rvh15hn2t2fdq2vt2vbtt3xfn2vow951itcxaadfg21yrulls9uunns5p2atbaip0da8grej8h0dzeuce8255afhttvzp0ds7tyzqwiifcrepinsxeq7jh1licmo2603eox0x1wnasy4g4t6wwjhkt7dxhc5mqao6mnt4wnjqyk7k6rhumlubdhxrrvmhvpxaa4g0bknsjbdtui289080k7chyx8xbghj3uc4cmy085u4wo4feuzc8m6sh5h0rd12r0u49dgqcnk8ji4n9596czd0vrh1bpxmhjq03m6gses52lort8n013kf5cmsmsvlaus7sdj73pi8ush22bwibbx29mnflpj4uhflae4n8vdc1r0olrrcpxfr38w1phlaefkjwsac8t3iz0viko7yyc859f3m3orq9momhs7hri28dik1b9s570ei0f1c8wh799icwdiy5asn23996nskeb13frhrwbpe012411r98rk4vc6724za == 
\4\t\r\h\u\9\4\t\l\q\1\r\s\s\q\6\w\a\n\c\v\c\z\h\f\t\d\n\i\f\i\e\z\7\l\2\a\y\x\f\h\5\b\1\z\s\4\0\y\b\z\g\d\z\7\7\x\j\r\y\x\1\y\8\9\m\t\h\g\y\t\3\z\e\z\d\2\d\5\l\k\p\g\q\v\f\a\a\d\g\u\v\u\b\g\y\y\0\3\8\p\9\s\q\u\e\m\4\n\p\s\x\z\q\4\2\i\f\m\0\o\p\c\9\z\n\f\k\n\1\q\y\t\6\b\x\3\r\o\4\q\x\m\8\1\o\w\2\x\3\u\c\u\g\w\5\7\q\p\s\z\3\9\e\0\b\8\i\o\o\4\b\8\q\b\e\i\v\5\r\a\c\b\5\c\2\z\s\3\l\f\t\o\d\3\y\1\n\6\5\v\m\l\e\3\v\8\c\b\v\5\q\8\i\h\c\t\a\u\1\x\i\6\t\b\o\9\c\i\h\b\b\j\w\5\c\f\h\8\j\j\q\s\i\i\r\f\p\o\y\c\i\t\2\u\d\z\v\9\6\c\e\7\b\p\y\6\i\w\0\y\h\f\x\9\e\z\6\4\p\g\i\s\u\q\3\j\6\p\i\8\i\2\f\g\o\2\j\r\g\h\h\v\t\r\z\8\l\m\v\y\v\8\v\m\c\3\x\x\r\b\p\u\s\c\s\e\4\e\i\x\k\9\3\a\r\c\3\b\d\h\n\e\e\j\i\4\a\s\m\e\9\a\k\0\p\9\v\i\t\o\i\w\9\m\o\5\g\f\0\o\x\r\z\u\o\g\m\u\j\q\3\3\w\j\n\q\0\u\n\3\z\q\h\a\n\a\a\h\1\z\9\k\l\d\9\t\9\y\n\0\g\9\f\x\q\6\5\b\v\q\q\h\e\t\s\k\m\a\z\u\1\p\l\h\7\o\2\x\6\9\a\e\w\m\q\8\6\6\m\m\n\k\c\7\u\7\8\v\f\j\h\8\y\7\u\u\t\k\h\1\u\0\r\2\z\k\e\8\z\0\7\w\0\l\y\n\7\6\t\1\d\o\2\i\b\k\7\h\r\u\n\b\e\8\6\8\x\f\4\v\u\6\o\z\y\v\6\d\w\b\7\3\r\v\h\1\5\h\n\2\t\2\f\d\q\2\v\t\2\v\b\t\t\3\x\f\n\2\v\o\w\9\5\1\i\t\c\x\a\a\d\f\g\2\1\y\r\u\l\l\s\9\u\u\n\n\s\5\p\2\a\t\b\a\i\p\0\d\a\8\g\r\e\j\8\h\0\d\z\e\u\c\e\8\2\5\5\a\f\h\t\t\v\z\p\0\d\s\7\t\y\z\q\w\i\i\f\c\r\e\p\i\n\s\x\e\q\7\j\h\1\l\i\c\m\o\2\6\0\3\e\o\x\0\x\1\w\n\a\s\y\4\g\4\t\6\w\w\j\h\k\t\7\d\x\h\c\5\m\q\a\o\6\m\n\t\4\w\n\j\q\y\k\7\k\6\r\h\u\m\l\u\b\d\h\x\r\r\v\m\h\v\p\x\a\a\4\g\0\b\k\n\s\j\b\d\t\u\i\2\8\9\0\8\0\k\7\c\h\y\x\8\x\b\g\h\j\3\u\c\4\c\m\y\0\8\5\u\4\w\o\4\f\e\u\z\c\8\m\6\s\h\5\h\0\r\d\1\2\r\0\u\4\9\d\g\q\c\n\k\8\j\i\4\n\9\5\9\6\c\z\d\0\v\r\h\1\b\p\x\m\h\j\q\0\3\m\6\g\s\e\s\5\2\l\o\r\t\8\n\0\1\3\k\f\5\c\m\s\m\s\v\l\a\u\s\7\s\d\j\7\3\p\i\8\u\s\h\2\2\b\w\i\b\b\x\2\9\m\n\f\l\p\j\4\u\h\f\l\a\e\4\n\8\v\d\c\1\r\0\o\l\r\r\c\p\x\f\r\3\8\w\1\p\h\l\a\e\f\k\j\w\s\a\c\8\t\3\i\z\0\v\i\k\o\7\y\y\c\8\5\9\f\3\m\3\o\r\q\9\m\o\m\h\s\7\h\r\i\2\8\d\i\k\1\b\9\s\5\7\0\e\i\0\f\1\c\8\w\h\7\9\9\i\c\w\d\i\y\5\a\s\n\2\3\9\9\6\n\s\k\e\b\1\3\f\r\h\r\w\b\p\e\0\1\2\4\1\1\r\9\8\r\k\4\v\c\6\7\2\4\z\a ]] 00:06:52.199 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:52.458 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:52.458 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:52.458 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:52.458 09:56:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:52.716 { 00:06:52.716 "subsystems": [ 00:06:52.716 { 00:06:52.716 "subsystem": "bdev", 00:06:52.716 "config": [ 00:06:52.716 { 00:06:52.716 "params": { 00:06:52.716 "block_size": 512, 00:06:52.716 "num_blocks": 1048576, 00:06:52.716 "name": "malloc0" 00:06:52.716 }, 00:06:52.716 "method": "bdev_malloc_create" 00:06:52.716 }, 00:06:52.717 { 00:06:52.717 "params": { 00:06:52.717 "filename": "/dev/zram1", 00:06:52.717 "name": "uring0" 00:06:52.717 }, 00:06:52.717 "method": "bdev_uring_create" 00:06:52.717 }, 00:06:52.717 { 00:06:52.717 "method": "bdev_wait_for_examine" 00:06:52.717 } 00:06:52.717 ] 00:06:52.717 } 00:06:52.717 ] 00:06:52.717 } 00:06:52.717 [2024-11-04 09:56:24.650757] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:52.717 [2024-11-04 09:56:24.650863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61211 ] 00:06:52.717 [2024-11-04 09:56:24.797344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.717 [2024-11-04 09:56:24.857851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.975 [2024-11-04 09:56:24.911900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.351  [2024-11-04T09:56:27.456Z] Copying: 152/512 [MB] (152 MBps) [2024-11-04T09:56:28.392Z] Copying: 305/512 [MB] (152 MBps) [2024-11-04T09:56:28.650Z] Copying: 454/512 [MB] (149 MBps) [2024-11-04T09:56:28.909Z] Copying: 512/512 [MB] (average 151 MBps) 00:06:56.739 00:06:56.739 09:56:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:56.739 09:56:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:56.739 09:56:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:56.739 09:56:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:56.739 09:56:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:56.739 09:56:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:56.739 09:56:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:56.739 09:56:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:56.997 [2024-11-04 09:56:28.940559] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:56.997 [2024-11-04 09:56:28.940688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61267 ] 00:06:56.997 { 00:06:56.997 "subsystems": [ 00:06:56.997 { 00:06:56.997 "subsystem": "bdev", 00:06:56.997 "config": [ 00:06:56.997 { 00:06:56.997 "params": { 00:06:56.997 "block_size": 512, 00:06:56.997 "num_blocks": 1048576, 00:06:56.997 "name": "malloc0" 00:06:56.997 }, 00:06:56.997 "method": "bdev_malloc_create" 00:06:56.997 }, 00:06:56.997 { 00:06:56.997 "params": { 00:06:56.997 "filename": "/dev/zram1", 00:06:56.997 "name": "uring0" 00:06:56.997 }, 00:06:56.997 "method": "bdev_uring_create" 00:06:56.997 }, 00:06:56.997 { 00:06:56.997 "params": { 00:06:56.997 "name": "uring0" 00:06:56.997 }, 00:06:56.997 "method": "bdev_uring_delete" 00:06:56.997 }, 00:06:56.997 { 00:06:56.997 "method": "bdev_wait_for_examine" 00:06:56.997 } 00:06:56.997 ] 00:06:56.997 } 00:06:56.997 ] 00:06:56.997 } 00:06:56.997 [2024-11-04 09:56:29.085877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.997 [2024-11-04 09:56:29.146135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.255 [2024-11-04 09:56:29.202596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.255  [2024-11-04T09:56:29.991Z] Copying: 0/0 [B] (average 0 Bps) 00:06:57.821 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:57.821 09:56:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:57.821 09:56:29 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:57.821 { 00:06:57.821 "subsystems": [ 00:06:57.821 { 00:06:57.821 "subsystem": "bdev", 00:06:57.821 "config": [ 00:06:57.821 { 00:06:57.821 "params": { 00:06:57.821 "block_size": 512, 00:06:57.821 "num_blocks": 1048576, 00:06:57.821 "name": "malloc0" 00:06:57.821 }, 00:06:57.821 "method": "bdev_malloc_create" 00:06:57.821 }, 00:06:57.821 { 00:06:57.821 "params": { 00:06:57.821 "filename": "/dev/zram1", 00:06:57.821 "name": "uring0" 00:06:57.821 }, 00:06:57.821 "method": "bdev_uring_create" 00:06:57.821 }, 00:06:57.821 { 00:06:57.821 "params": { 00:06:57.821 "name": "uring0" 00:06:57.821 }, 00:06:57.821 "method": "bdev_uring_delete" 00:06:57.821 }, 00:06:57.821 { 00:06:57.821 "method": "bdev_wait_for_examine" 00:06:57.821 } 00:06:57.821 ] 00:06:57.821 } 00:06:57.821 ] 00:06:57.821 } 00:06:57.821 [2024-11-04 09:56:29.883709] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:06:57.821 [2024-11-04 09:56:29.883810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61297 ] 00:06:58.079 [2024-11-04 09:56:30.025549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.079 [2024-11-04 09:56:30.080479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.079 [2024-11-04 09:56:30.134551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.338 [2024-11-04 09:56:30.343237] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:58.338 [2024-11-04 09:56:30.343299] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:58.338 [2024-11-04 09:56:30.343327] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:58.338 [2024-11-04 09:56:30.343353] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.597 [2024-11-04 09:56:30.665595] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:58.597 09:56:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:59.166 00:06:59.166 real 0m15.206s 00:06:59.166 user 0m10.340s 00:06:59.166 sys 0m12.632s 00:06:59.167 09:56:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.167 ************************************ 00:06:59.167 END TEST dd_uring_copy 00:06:59.167 09:56:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.167 ************************************ 00:06:59.167 00:06:59.167 real 0m15.435s 00:06:59.167 user 0m10.466s 00:06:59.167 sys 0m12.737s 00:06:59.167 09:56:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.167 09:56:31 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:59.167 ************************************ 00:06:59.167 END TEST spdk_dd_uring 00:06:59.167 ************************************ 00:06:59.167 09:56:31 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:59.167 09:56:31 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.167 09:56:31 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.167 09:56:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:59.167 ************************************ 00:06:59.167 START TEST spdk_dd_sparse 00:06:59.167 ************************************ 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:59.167 * Looking for test storage... 00:06:59.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:59.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.167 --rc genhtml_branch_coverage=1 00:06:59.167 --rc genhtml_function_coverage=1 00:06:59.167 --rc genhtml_legend=1 00:06:59.167 --rc geninfo_all_blocks=1 00:06:59.167 --rc geninfo_unexecuted_blocks=1 00:06:59.167 00:06:59.167 ' 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:59.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.167 --rc genhtml_branch_coverage=1 00:06:59.167 --rc genhtml_function_coverage=1 00:06:59.167 --rc genhtml_legend=1 00:06:59.167 --rc geninfo_all_blocks=1 00:06:59.167 --rc geninfo_unexecuted_blocks=1 00:06:59.167 00:06:59.167 ' 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:59.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.167 --rc genhtml_branch_coverage=1 00:06:59.167 --rc genhtml_function_coverage=1 00:06:59.167 --rc genhtml_legend=1 00:06:59.167 --rc geninfo_all_blocks=1 00:06:59.167 --rc geninfo_unexecuted_blocks=1 00:06:59.167 00:06:59.167 ' 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:59.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.167 --rc genhtml_branch_coverage=1 00:06:59.167 --rc genhtml_function_coverage=1 00:06:59.167 --rc genhtml_legend=1 00:06:59.167 --rc geninfo_all_blocks=1 00:06:59.167 --rc geninfo_unexecuted_blocks=1 00:06:59.167 00:06:59.167 ' 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.167 09:56:31 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:59.167 1+0 records in 00:06:59.167 1+0 records out 00:06:59.167 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00461924 s, 908 MB/s 00:06:59.167 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:59.426 1+0 records in 00:06:59.426 1+0 records out 00:06:59.426 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00768788 s, 546 MB/s 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:59.426 1+0 records in 00:06:59.426 1+0 records out 00:06:59.426 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00633504 s, 662 MB/s 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:59.426 ************************************ 00:06:59.426 START TEST dd_sparse_file_to_file 00:06:59.426 ************************************ 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:59.426 09:56:31 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:59.426 { 00:06:59.426 "subsystems": [ 00:06:59.426 { 00:06:59.426 "subsystem": "bdev", 00:06:59.426 "config": [ 00:06:59.426 { 00:06:59.426 "params": { 00:06:59.426 "block_size": 4096, 00:06:59.426 "filename": "dd_sparse_aio_disk", 00:06:59.426 "name": "dd_aio" 00:06:59.426 }, 00:06:59.426 "method": "bdev_aio_create" 00:06:59.426 }, 00:06:59.426 { 00:06:59.426 "params": { 00:06:59.426 "lvs_name": "dd_lvstore", 00:06:59.426 "bdev_name": "dd_aio" 00:06:59.426 }, 00:06:59.426 "method": "bdev_lvol_create_lvstore" 00:06:59.426 }, 00:06:59.426 { 00:06:59.426 "method": "bdev_wait_for_examine" 00:06:59.426 } 00:06:59.426 ] 00:06:59.426 } 00:06:59.426 ] 00:06:59.426 } 00:06:59.426 [2024-11-04 09:56:31.421324] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:06:59.426 [2024-11-04 09:56:31.421468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61393 ] 00:06:59.426 [2024-11-04 09:56:31.570854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.683 [2024-11-04 09:56:31.634404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.683 [2024-11-04 09:56:31.692531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.683  [2024-11-04T09:56:32.112Z] Copying: 12/36 [MB] (average 923 MBps) 00:06:59.942 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:59.942 00:06:59.942 real 0m0.692s 00:06:59.942 user 0m0.403s 00:06:59.942 sys 0m0.367s 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:59.942 ************************************ 00:06:59.942 END TEST dd_sparse_file_to_file 00:06:59.942 ************************************ 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:59.942 ************************************ 00:06:59.942 START TEST dd_sparse_file_to_bdev 00:06:59.942 ************************************ 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:59.942 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.200 [2024-11-04 09:56:32.160178] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:07:00.200 [2024-11-04 09:56:32.160753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61440 ] 00:07:00.200 { 00:07:00.200 "subsystems": [ 00:07:00.200 { 00:07:00.200 "subsystem": "bdev", 00:07:00.200 "config": [ 00:07:00.200 { 00:07:00.200 "params": { 00:07:00.200 "block_size": 4096, 00:07:00.200 "filename": "dd_sparse_aio_disk", 00:07:00.200 "name": "dd_aio" 00:07:00.200 }, 00:07:00.200 "method": "bdev_aio_create" 00:07:00.200 }, 00:07:00.200 { 00:07:00.200 "params": { 00:07:00.200 "lvs_name": "dd_lvstore", 00:07:00.200 "lvol_name": "dd_lvol", 00:07:00.200 "size_in_mib": 36, 00:07:00.200 "thin_provision": true 00:07:00.200 }, 00:07:00.200 "method": "bdev_lvol_create" 00:07:00.200 }, 00:07:00.200 { 00:07:00.200 "method": "bdev_wait_for_examine" 00:07:00.200 } 00:07:00.200 ] 00:07:00.200 } 00:07:00.200 ] 00:07:00.200 } 00:07:00.200 [2024-11-04 09:56:32.313575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.458 [2024-11-04 09:56:32.376748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.458 [2024-11-04 09:56:32.437112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.458  [2024-11-04T09:56:32.887Z] Copying: 12/36 [MB] (average 521 MBps) 00:07:00.717 00:07:00.717 00:07:00.717 real 0m0.670s 00:07:00.717 user 0m0.429s 00:07:00.717 sys 0m0.354s 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.717 ************************************ 00:07:00.717 END TEST dd_sparse_file_to_bdev 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:00.717 ************************************ 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:00.717 ************************************ 00:07:00.717 START TEST dd_sparse_bdev_to_file 00:07:00.717 ************************************ 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:00.717 09:56:32 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:00.717 { 00:07:00.717 "subsystems": [ 00:07:00.717 { 00:07:00.717 "subsystem": "bdev", 00:07:00.717 "config": [ 00:07:00.717 { 00:07:00.717 "params": { 00:07:00.717 "block_size": 4096, 00:07:00.717 "filename": "dd_sparse_aio_disk", 00:07:00.717 "name": "dd_aio" 00:07:00.717 }, 00:07:00.717 "method": "bdev_aio_create" 00:07:00.717 }, 00:07:00.717 { 00:07:00.717 "method": "bdev_wait_for_examine" 00:07:00.717 } 00:07:00.717 ] 00:07:00.717 } 00:07:00.717 ] 00:07:00.717 } 00:07:00.717 [2024-11-04 09:56:32.881257] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:07:00.717 [2024-11-04 09:56:32.881382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61478 ] 00:07:00.975 [2024-11-04 09:56:33.027428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.975 [2024-11-04 09:56:33.090876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.233 [2024-11-04 09:56:33.146084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.233  [2024-11-04T09:56:33.661Z] Copying: 12/36 [MB] (average 857 MBps) 00:07:01.491 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:01.491 00:07:01.491 real 0m0.653s 00:07:01.491 user 0m0.393s 00:07:01.491 
sys 0m0.365s 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:01.491 ************************************ 00:07:01.491 END TEST dd_sparse_bdev_to_file 00:07:01.491 ************************************ 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:01.491 ************************************ 00:07:01.491 END TEST spdk_dd_sparse 00:07:01.491 ************************************ 00:07:01.491 00:07:01.491 real 0m2.409s 00:07:01.491 user 0m1.404s 00:07:01.491 sys 0m1.287s 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.491 09:56:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:01.491 09:56:33 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:01.491 09:56:33 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.491 09:56:33 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.491 09:56:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:01.491 ************************************ 00:07:01.491 START TEST spdk_dd_negative 00:07:01.491 ************************************ 00:07:01.491 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:01.751 * Looking for test storage... 
00:07:01.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.751 --rc genhtml_branch_coverage=1 00:07:01.751 --rc genhtml_function_coverage=1 00:07:01.751 --rc genhtml_legend=1 00:07:01.751 --rc geninfo_all_blocks=1 00:07:01.751 --rc geninfo_unexecuted_blocks=1 00:07:01.751 00:07:01.751 ' 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.751 --rc genhtml_branch_coverage=1 00:07:01.751 --rc genhtml_function_coverage=1 00:07:01.751 --rc genhtml_legend=1 00:07:01.751 --rc geninfo_all_blocks=1 00:07:01.751 --rc geninfo_unexecuted_blocks=1 00:07:01.751 00:07:01.751 ' 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.751 --rc genhtml_branch_coverage=1 00:07:01.751 --rc genhtml_function_coverage=1 00:07:01.751 --rc genhtml_legend=1 00:07:01.751 --rc geninfo_all_blocks=1 00:07:01.751 --rc geninfo_unexecuted_blocks=1 00:07:01.751 00:07:01.751 ' 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:01.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.751 --rc genhtml_branch_coverage=1 00:07:01.751 --rc genhtml_function_coverage=1 00:07:01.751 --rc genhtml_legend=1 00:07:01.751 --rc geninfo_all_blocks=1 00:07:01.751 --rc geninfo_unexecuted_blocks=1 00:07:01.751 00:07:01.751 ' 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.751 09:56:33 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.752 ************************************ 00:07:01.752 START TEST 
dd_invalid_arguments 00:07:01.752 ************************************ 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:01.752 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:01.752 00:07:01.752 CPU options: 00:07:01.752 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:01.752 (like [0,1,10]) 00:07:01.752 --lcores lcore to CPU mapping list. The list is in the format: 00:07:01.752 [<,lcores[@CPUs]>...] 00:07:01.752 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:01.752 Within the group, '-' is used for range separator, 00:07:01.752 ',' is used for single number separator. 00:07:01.752 '( )' can be omitted for single element group, 00:07:01.752 '@' can be omitted if cpus and lcores have the same value 00:07:01.752 --disable-cpumask-locks Disable CPU core lock files. 00:07:01.752 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:01.752 pollers in the app support interrupt mode) 00:07:01.752 -p, --main-core main (primary) core for DPDK 00:07:01.752 00:07:01.752 Configuration options: 00:07:01.752 -c, --config, --json JSON config file 00:07:01.752 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:01.752 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:01.752 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:01.752 --rpcs-allowed comma-separated list of permitted RPCS 00:07:01.752 --json-ignore-init-errors don't exit on invalid config entry 00:07:01.752 00:07:01.752 Memory options: 00:07:01.752 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:01.752 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:01.752 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:01.752 -R, --huge-unlink unlink huge files after initialization 00:07:01.752 -n, --mem-channels number of memory channels used for DPDK 00:07:01.752 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:01.752 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:01.752 --no-huge run without using hugepages 00:07:01.752 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:01.752 -i, --shm-id shared memory ID (optional) 00:07:01.752 -g, --single-file-segments force creating just one hugetlbfs file 00:07:01.752 00:07:01.752 PCI options: 00:07:01.752 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:01.752 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:01.752 -u, --no-pci disable PCI access 00:07:01.752 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:01.752 00:07:01.752 Log options: 00:07:01.752 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:01.752 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:01.752 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:01.752 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:01.752 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:01.752 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:01.752 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:01.752 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:01.752 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:01.752 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:01.752 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:01.752 --silence-noticelog disable notice level logging to stderr 00:07:01.752 00:07:01.752 Trace options: 00:07:01.752 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:01.752 setting 0 to disable trace (default 32768) 00:07:01.752 Tracepoints vary in size and can use more than one trace entry. 00:07:01.752 -e, --tpoint-group [:] 00:07:01.752 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:01.752 [2024-11-04 09:56:33.845725] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:01.752 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:01.752 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:01.752 bdev_raid, scheduler, all). 00:07:01.752 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:01.752 a tracepoint group. First tpoint inside a group can be enabled by 00:07:01.752 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:01.752 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:01.752 in /include/spdk_internal/trace_defs.h 00:07:01.752 00:07:01.752 Other options: 00:07:01.752 -h, --help show this usage 00:07:01.752 -v, --version print SPDK version 00:07:01.752 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:01.752 --env-context Opaque context for use of the env implementation 00:07:01.752 00:07:01.752 Application specific: 00:07:01.752 [--------- DD Options ---------] 00:07:01.752 --if Input file. Must specify either --if or --ib. 00:07:01.752 --ib Input bdev. Must specifier either --if or --ib 00:07:01.752 --of Output file. Must specify either --of or --ob. 00:07:01.752 --ob Output bdev. Must specify either --of or --ob. 00:07:01.752 --iflag Input file flags. 00:07:01.752 --oflag Output file flags. 00:07:01.752 --bs I/O unit size (default: 4096) 00:07:01.752 --qd Queue depth (default: 2) 00:07:01.752 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:01.752 --skip Skip this many I/O units at start of input. (default: 0) 00:07:01.752 --seek Skip this many I/O units at start of output. (default: 0) 00:07:01.752 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:01.752 --sparse Enable hole skipping in input target 00:07:01.752 Available iflag and oflag values: 00:07:01.752 append - append mode 00:07:01.752 direct - use direct I/O for data 00:07:01.752 directory - fail unless a directory 00:07:01.752 dsync - use synchronized I/O for data 00:07:01.752 noatime - do not update access time 00:07:01.752 noctty - do not assign controlling terminal from file 00:07:01.752 nofollow - do not follow symlinks 00:07:01.752 nonblock - use non-blocking I/O 00:07:01.752 sync - use synchronized I/O for data and metadata 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.752 00:07:01.752 real 0m0.081s 00:07:01.752 user 0m0.048s 00:07:01.752 sys 0m0.032s 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:01.752 ************************************ 00:07:01.752 END TEST dd_invalid_arguments 00:07:01.752 ************************************ 00:07:01.752 09:56:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:01.753 ************************************ 00:07:01.753 START TEST dd_double_input 00:07:01.753 ************************************ 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.753 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:02.012 [2024-11-04 09:56:33.974524] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:07:02.012 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:02.012 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.012 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.012 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.012 00:07:02.012 real 0m0.080s 00:07:02.012 user 0m0.047s 00:07:02.012 sys 0m0.032s 00:07:02.012 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.012 09:56:33 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:02.012 ************************************ 00:07:02.012 END TEST dd_double_input 00:07:02.012 ************************************ 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.012 ************************************ 00:07:02.012 START TEST dd_double_output 00:07:02.012 ************************************ 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:02.012 [2024-11-04 09:56:34.100308] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.012 00:07:02.012 real 0m0.069s 00:07:02.012 user 0m0.042s 00:07:02.012 sys 0m0.027s 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:02.012 ************************************ 00:07:02.012 END TEST dd_double_output 00:07:02.012 ************************************ 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.012 ************************************ 00:07:02.012 START TEST dd_no_input 00:07:02.012 ************************************ 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.012 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:02.271 [2024-11-04 09:56:34.230516] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.271 00:07:02.271 real 0m0.083s 00:07:02.271 user 0m0.044s 00:07:02.271 sys 0m0.038s 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:02.271 ************************************ 00:07:02.271 END TEST dd_no_input 00:07:02.271 ************************************ 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.271 ************************************ 00:07:02.271 START TEST dd_no_output 00:07:02.271 ************************************ 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:02.271 [2024-11-04 09:56:34.357949] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:02.271 09:56:34 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.271 00:07:02.271 real 0m0.079s 00:07:02.271 user 0m0.051s 00:07:02.271 sys 0m0.026s 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:02.271 ************************************ 00:07:02.271 END TEST dd_no_output 00:07:02.271 ************************************ 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.271 ************************************ 00:07:02.271 START TEST dd_wrong_blocksize 00:07:02.271 ************************************ 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.271 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.272 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.272 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.272 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:02.530 [2024-11-04 09:56:34.493164] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.530 00:07:02.530 real 0m0.077s 00:07:02.530 user 0m0.052s 00:07:02.530 sys 0m0.024s 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:02.530 ************************************ 00:07:02.530 END TEST dd_wrong_blocksize 00:07:02.530 ************************************ 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:02.530 ************************************ 00:07:02.530 START TEST dd_smaller_blocksize 00:07:02.530 ************************************ 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.530 
09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.530 09:56:34 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:02.530 [2024-11-04 09:56:34.624694] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:07:02.530 [2024-11-04 09:56:34.624823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61705 ] 00:07:02.789 [2024-11-04 09:56:34.777243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.789 [2024-11-04 09:56:34.860848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.789 [2024-11-04 09:56:34.921268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.356 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:03.356 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:03.356 [2024-11-04 09:56:35.517744] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:03.356 [2024-11-04 09:56:35.517841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.614 [2024-11-04 09:56:35.638099] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.614 00:07:03.614 real 0m1.138s 00:07:03.614 user 0m0.415s 00:07:03.614 sys 0m0.613s 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:03.614 ************************************ 00:07:03.614 END TEST dd_smaller_blocksize 00:07:03.614 ************************************ 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.614 ************************************ 00:07:03.614 START TEST dd_invalid_count 00:07:03.614 ************************************ 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.614 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:03.874 [2024-11-04 09:56:35.818076] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.874 00:07:03.874 real 0m0.078s 00:07:03.874 user 0m0.051s 00:07:03.874 sys 0m0.026s 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:03.874 ************************************ 00:07:03.874 END TEST dd_invalid_count 00:07:03.874 ************************************ 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.874 ************************************ 
00:07:03.874 START TEST dd_invalid_oflag 00:07:03.874 ************************************ 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:03.874 [2024-11-04 09:56:35.939707] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.874 00:07:03.874 real 0m0.067s 00:07:03.874 user 0m0.042s 00:07:03.874 sys 0m0.024s 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.874 ************************************ 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:03.874 END TEST dd_invalid_oflag 00:07:03.874 ************************************ 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.874 09:56:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:03.874 ************************************ 00:07:03.874 START TEST dd_invalid_iflag 00:07:03.874 
************************************ 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.874 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:04.133 [2024-11-04 09:56:36.065493] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.133 00:07:04.133 real 0m0.082s 00:07:04.133 user 0m0.049s 00:07:04.133 sys 0m0.031s 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:04.133 ************************************ 00:07:04.133 END TEST dd_invalid_iflag 00:07:04.133 ************************************ 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.133 ************************************ 00:07:04.133 START TEST dd_unknown_flag 00:07:04.133 ************************************ 00:07:04.133 
09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.133 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:04.133 [2024-11-04 09:56:36.194176] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:07:04.133 [2024-11-04 09:56:36.194280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61802 ] 00:07:04.390 [2024-11-04 09:56:36.340824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.390 [2024-11-04 09:56:36.395339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.390 [2024-11-04 09:56:36.451964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.390 [2024-11-04 09:56:36.489932] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:04.390 [2024-11-04 09:56:36.489979] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.390 [2024-11-04 09:56:36.490038] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:04.390 [2024-11-04 09:56:36.490056] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.390 [2024-11-04 09:56:36.490258] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:04.390 [2024-11-04 09:56:36.490282] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.390 [2024-11-04 09:56:36.490341] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:04.390 [2024-11-04 09:56:36.490355] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:04.648 [2024-11-04 09:56:36.607971] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.648 00:07:04.648 real 0m0.540s 00:07:04.648 user 0m0.294s 00:07:04.648 sys 0m0.151s 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:04.648 ************************************ 00:07:04.648 END TEST dd_unknown_flag 00:07:04.648 ************************************ 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:04.648 ************************************ 00:07:04.648 START TEST dd_invalid_json 00:07:04.648 ************************************ 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.648 09:56:36 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:04.648 [2024-11-04 09:56:36.785194] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:07:04.648 [2024-11-04 09:56:36.785295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61831 ] 00:07:04.906 [2024-11-04 09:56:36.931231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.906 [2024-11-04 09:56:36.993836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.906 [2024-11-04 09:56:36.993929] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:04.906 [2024-11-04 09:56:36.993952] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:04.906 [2024-11-04 09:56:36.993964] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.906 [2024-11-04 09:56:36.994006] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:04.906 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:04.906 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.906 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:04.906 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:04.906 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:04.906 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.906 00:07:04.906 real 0m0.339s 00:07:04.906 user 0m0.173s 00:07:04.906 sys 0m0.064s 00:07:04.906 ************************************ 00:07:04.906 END TEST dd_invalid_json 00:07:04.906 ************************************ 00:07:04.906 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.906 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.164 ************************************ 00:07:05.164 START TEST dd_invalid_seek 00:07:05.164 ************************************ 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:05.164 
09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.164 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:05.164 { 00:07:05.164 "subsystems": [ 00:07:05.164 { 00:07:05.164 "subsystem": "bdev", 00:07:05.164 "config": [ 00:07:05.164 { 00:07:05.164 "params": { 00:07:05.164 "block_size": 512, 00:07:05.164 "num_blocks": 512, 00:07:05.164 "name": "malloc0" 00:07:05.164 }, 00:07:05.164 "method": "bdev_malloc_create" 00:07:05.164 }, 00:07:05.164 { 00:07:05.164 "params": { 00:07:05.164 "block_size": 512, 00:07:05.164 "num_blocks": 512, 00:07:05.164 "name": "malloc1" 00:07:05.164 }, 00:07:05.164 "method": "bdev_malloc_create" 00:07:05.164 }, 00:07:05.164 { 00:07:05.164 "method": "bdev_wait_for_examine" 00:07:05.164 } 00:07:05.164 ] 00:07:05.164 } 00:07:05.164 ] 00:07:05.164 } 00:07:05.164 [2024-11-04 09:56:37.180413] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
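The dd_invalid_seek case above feeds spdk_dd a bdev configuration on --json (two malloc bdevs of 512 blocks x 512 bytes) and then requests --seek=513, one block past the end of the output bdev malloc1. A rough stand-alone reproduction, using a temporary file in place of the /dev/fd/62 descriptor the harness supplies and assuming the command is run from the SPDK build tree, looks like:

  printf '%s\n' '{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"name":"malloc0","num_blocks":512,"block_size":512},"method":"bdev_malloc_create"},
    {"params":{"name":"malloc1","num_blocks":512,"block_size":512},"method":"bdev_malloc_create"},
    {"method":"bdev_wait_for_examine"}]}]}' > /tmp/dd_malloc.json
  ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json /tmp/dd_malloc.json
  # Expected to fail with: --seek value too big (513) - only 512 blocks available in output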
00:07:05.164 [2024-11-04 09:56:37.180521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61862 ] 00:07:05.164 [2024-11-04 09:56:37.327037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.455 [2024-11-04 09:56:37.389456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.455 [2024-11-04 09:56:37.445771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.455 [2024-11-04 09:56:37.508308] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:05.455 [2024-11-04 09:56:37.508675] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.713 [2024-11-04 09:56:37.631399] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:07:05.714 ************************************ 00:07:05.714 END TEST dd_invalid_seek 00:07:05.714 ************************************ 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.714 00:07:05.714 real 0m0.573s 00:07:05.714 user 0m0.377s 00:07:05.714 sys 0m0.153s 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:05.714 ************************************ 00:07:05.714 START TEST dd_invalid_skip 00:07:05.714 ************************************ 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.714 09:56:37 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:05.714 [2024-11-04 09:56:37.810625] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
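dd_invalid_skip, starting above, is the input-side mirror of the seek case: the same two-malloc configuration, but the copy is asked to skip 513 blocks of a 512-block input bdev. With the illustrative /tmp/dd_malloc.json from the previous sketch still in place, the equivalent one-off invocation is:

  ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --bs=512 --json /tmp/dd_malloc.json
  # Expected to fail with: --skip value too big (513) - only 512 blocks available in input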
00:07:05.714 [2024-11-04 09:56:37.810924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61894 ] 00:07:05.714 { 00:07:05.714 "subsystems": [ 00:07:05.714 { 00:07:05.714 "subsystem": "bdev", 00:07:05.714 "config": [ 00:07:05.714 { 00:07:05.714 "params": { 00:07:05.714 "block_size": 512, 00:07:05.714 "num_blocks": 512, 00:07:05.714 "name": "malloc0" 00:07:05.714 }, 00:07:05.714 "method": "bdev_malloc_create" 00:07:05.714 }, 00:07:05.714 { 00:07:05.714 "params": { 00:07:05.714 "block_size": 512, 00:07:05.714 "num_blocks": 512, 00:07:05.714 "name": "malloc1" 00:07:05.714 }, 00:07:05.714 "method": "bdev_malloc_create" 00:07:05.714 }, 00:07:05.714 { 00:07:05.714 "method": "bdev_wait_for_examine" 00:07:05.714 } 00:07:05.714 ] 00:07:05.714 } 00:07:05.714 ] 00:07:05.714 } 00:07:05.972 [2024-11-04 09:56:37.958053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.972 [2024-11-04 09:56:38.027561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.972 [2024-11-04 09:56:38.085507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.231 [2024-11-04 09:56:38.145816] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:06.231 [2024-11-04 09:56:38.145892] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.231 [2024-11-04 09:56:38.261480] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.231 00:07:06.231 real 0m0.582s 00:07:06.231 user 0m0.365s 00:07:06.231 sys 0m0.173s 00:07:06.231 ************************************ 00:07:06.231 END TEST dd_invalid_skip 00:07:06.231 ************************************ 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:06.231 ************************************ 00:07:06.231 START TEST dd_invalid_input_count 00:07:06.231 ************************************ 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:07:06.231 09:56:38 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.231 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:06.489 { 00:07:06.489 "subsystems": [ 00:07:06.489 { 00:07:06.489 "subsystem": "bdev", 00:07:06.489 "config": [ 00:07:06.489 { 00:07:06.489 "params": { 00:07:06.489 "block_size": 512, 00:07:06.489 "num_blocks": 512, 00:07:06.489 "name": "malloc0" 00:07:06.489 }, 
00:07:06.489 "method": "bdev_malloc_create" 00:07:06.489 }, 00:07:06.489 { 00:07:06.489 "params": { 00:07:06.489 "block_size": 512, 00:07:06.489 "num_blocks": 512, 00:07:06.489 "name": "malloc1" 00:07:06.489 }, 00:07:06.489 "method": "bdev_malloc_create" 00:07:06.489 }, 00:07:06.489 { 00:07:06.489 "method": "bdev_wait_for_examine" 00:07:06.489 } 00:07:06.489 ] 00:07:06.489 } 00:07:06.489 ] 00:07:06.489 } 00:07:06.489 [2024-11-04 09:56:38.452528] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:07:06.489 [2024-11-04 09:56:38.452662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61925 ] 00:07:06.489 [2024-11-04 09:56:38.607861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.747 [2024-11-04 09:56:38.672729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.747 [2024-11-04 09:56:38.729716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.747 [2024-11-04 09:56:38.797210] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:06.747 [2024-11-04 09:56:38.797292] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.006 [2024-11-04 09:56:38.924331] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.006 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:07:07.006 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.006 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:07:07.006 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.006 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:07:07.006 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.006 00:07:07.006 real 0m0.605s 00:07:07.006 user 0m0.391s 00:07:07.006 sys 0m0.166s 00:07:07.006 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.006 09:56:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:07.006 ************************************ 00:07:07.006 END TEST dd_invalid_input_count 00:07:07.006 ************************************ 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.006 ************************************ 00:07:07.006 START TEST dd_invalid_output_count 00:07:07.006 ************************************ 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # invalid_output_count 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.006 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:07.006 { 00:07:07.006 "subsystems": [ 00:07:07.006 { 00:07:07.006 "subsystem": "bdev", 00:07:07.006 "config": [ 00:07:07.006 { 00:07:07.006 "params": { 00:07:07.006 "block_size": 512, 00:07:07.006 "num_blocks": 512, 00:07:07.006 "name": "malloc0" 00:07:07.006 }, 00:07:07.006 "method": "bdev_malloc_create" 00:07:07.006 }, 00:07:07.006 { 00:07:07.006 "method": "bdev_wait_for_examine" 00:07:07.006 } 00:07:07.006 ] 00:07:07.006 } 00:07:07.006 ] 00:07:07.006 } 00:07:07.006 [2024-11-04 09:56:39.116527] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
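dd_invalid_output_count, whose trace begins above, approaches the same limit from the file side: the dump file is the input, a single 512-block malloc0 is the output, and --count=513 requests one block more than the target holds. An illustrative equivalent with a single-bdev configuration (file paths shortened):

  printf '%s\n' '{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"name":"malloc0","num_blocks":512,"block_size":512},"method":"bdev_malloc_create"},
    {"method":"bdev_wait_for_examine"}]}]}' > /tmp/dd_malloc0.json
  ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=malloc0 --count=513 --bs=512 --json /tmp/dd_malloc0.json
  # Expected to fail with: --count value too big (513) - only 512 blocks available in output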
00:07:07.006 [2024-11-04 09:56:39.116666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61961 ] 00:07:07.264 [2024-11-04 09:56:39.265722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.264 [2024-11-04 09:56:39.326505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.264 [2024-11-04 09:56:39.382741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.524 [2024-11-04 09:56:39.438972] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:07.524 [2024-11-04 09:56:39.439087] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.524 [2024-11-04 09:56:39.563779] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.524 00:07:07.524 real 0m0.580s 00:07:07.524 user 0m0.380s 00:07:07.524 sys 0m0.159s 00:07:07.524 ************************************ 00:07:07.524 END TEST dd_invalid_output_count 00:07:07.524 ************************************ 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:07.524 ************************************ 00:07:07.524 START TEST dd_bs_not_multiple 00:07:07.524 ************************************ 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:07.524 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:07.525 09:56:39 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.525 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.785 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.785 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.785 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.785 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:07.785 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:07.785 09:56:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:07.785 { 00:07:07.785 "subsystems": [ 00:07:07.785 { 00:07:07.785 "subsystem": "bdev", 00:07:07.785 "config": [ 00:07:07.785 { 00:07:07.785 "params": { 00:07:07.785 "block_size": 512, 00:07:07.785 "num_blocks": 512, 00:07:07.785 "name": "malloc0" 00:07:07.785 }, 00:07:07.785 "method": "bdev_malloc_create" 00:07:07.785 }, 00:07:07.785 { 00:07:07.785 "params": { 00:07:07.785 "block_size": 512, 00:07:07.785 "num_blocks": 512, 00:07:07.785 "name": "malloc1" 00:07:07.785 }, 00:07:07.785 "method": "bdev_malloc_create" 00:07:07.785 }, 00:07:07.785 { 00:07:07.785 "method": "bdev_wait_for_examine" 00:07:07.785 } 00:07:07.785 ] 00:07:07.785 } 00:07:07.785 ] 00:07:07.785 } 00:07:07.785 [2024-11-04 09:56:39.750396] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
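The last case, dd_bs_not_multiple, reuses the two-malloc configuration but passes --bs=513, which is not a multiple of the input bdev's 512-byte native block size. The validation it exercises amounts to the following arithmetic (values taken from the trace; the check itself lives in spdk_dd.c):

  bs=513 native_bs=512
  if (( bs % native_bs != 0 )); then
      echo "--bs value must be a multiple of input native block size ($native_bs)" >&2
      exit 1
  fi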
00:07:07.785 [2024-11-04 09:56:39.750488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61998 ] 00:07:07.785 [2024-11-04 09:56:39.899128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.785 [2024-11-04 09:56:39.951362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.044 [2024-11-04 09:56:40.008927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.044 [2024-11-04 09:56:40.068730] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:08.044 [2024-11-04 09:56:40.068813] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.044 [2024-11-04 09:56:40.193216] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.303 00:07:08.303 real 0m0.568s 00:07:08.303 user 0m0.364s 00:07:08.303 sys 0m0.162s 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:08.303 ************************************ 00:07:08.303 END TEST dd_bs_not_multiple 00:07:08.303 ************************************ 00:07:08.303 00:07:08.303 real 0m6.710s 00:07:08.303 user 0m3.570s 00:07:08.303 sys 0m2.527s 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.303 09:56:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:08.303 ************************************ 00:07:08.303 END TEST spdk_dd_negative 00:07:08.303 ************************************ 00:07:08.303 00:07:08.303 real 1m18.476s 00:07:08.303 user 0m50.309s 00:07:08.303 sys 0m34.495s 00:07:08.303 09:56:40 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.303 09:56:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:08.303 ************************************ 00:07:08.303 END TEST spdk_dd 00:07:08.303 ************************************ 00:07:08.303 09:56:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:08.303 09:56:40 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:08.303 09:56:40 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:08.303 09:56:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.303 09:56:40 -- common/autotest_common.sh@10 -- # set +x 00:07:08.303 09:56:40 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:08.303 09:56:40 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:08.303 09:56:40 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:08.303 09:56:40 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:07:08.303 09:56:40 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:08.303 09:56:40 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:08.303 09:56:40 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.303 09:56:40 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:08.303 09:56:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.303 09:56:40 -- common/autotest_common.sh@10 -- # set +x 00:07:08.303 ************************************ 00:07:08.303 START TEST nvmf_tcp 00:07:08.304 ************************************ 00:07:08.304 09:56:40 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.564 * Looking for test storage... 00:07:08.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.564 09:56:40 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.564 --rc genhtml_branch_coverage=1 00:07:08.564 --rc genhtml_function_coverage=1 00:07:08.564 --rc genhtml_legend=1 00:07:08.564 --rc geninfo_all_blocks=1 00:07:08.564 --rc geninfo_unexecuted_blocks=1 00:07:08.564 00:07:08.564 ' 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.564 --rc genhtml_branch_coverage=1 00:07:08.564 --rc genhtml_function_coverage=1 00:07:08.564 --rc genhtml_legend=1 00:07:08.564 --rc geninfo_all_blocks=1 00:07:08.564 --rc geninfo_unexecuted_blocks=1 00:07:08.564 00:07:08.564 ' 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:08.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.564 --rc genhtml_branch_coverage=1 00:07:08.564 --rc genhtml_function_coverage=1 00:07:08.564 --rc genhtml_legend=1 00:07:08.564 --rc geninfo_all_blocks=1 00:07:08.564 --rc geninfo_unexecuted_blocks=1 00:07:08.564 00:07:08.564 ' 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.564 --rc genhtml_branch_coverage=1 00:07:08.564 --rc genhtml_function_coverage=1 00:07:08.564 --rc genhtml_legend=1 00:07:08.564 --rc geninfo_all_blocks=1 00:07:08.564 --rc geninfo_unexecuted_blocks=1 00:07:08.564 00:07:08.564 ' 00:07:08.564 09:56:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:08.564 09:56:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.564 09:56:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.564 09:56:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.564 ************************************ 00:07:08.564 START TEST nvmf_target_core 00:07:08.564 ************************************ 00:07:08.564 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:08.564 * Looking for test storage... 00:07:08.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:08.564 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.564 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:08.564 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.827 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.828 --rc genhtml_branch_coverage=1 00:07:08.828 --rc genhtml_function_coverage=1 00:07:08.828 --rc genhtml_legend=1 00:07:08.828 --rc geninfo_all_blocks=1 00:07:08.828 --rc geninfo_unexecuted_blocks=1 00:07:08.828 00:07:08.828 ' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.828 --rc genhtml_branch_coverage=1 00:07:08.828 --rc genhtml_function_coverage=1 00:07:08.828 --rc genhtml_legend=1 00:07:08.828 --rc geninfo_all_blocks=1 00:07:08.828 --rc geninfo_unexecuted_blocks=1 00:07:08.828 00:07:08.828 ' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:08.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.828 --rc genhtml_branch_coverage=1 00:07:08.828 --rc genhtml_function_coverage=1 00:07:08.828 --rc genhtml_legend=1 00:07:08.828 --rc geninfo_all_blocks=1 00:07:08.828 --rc geninfo_unexecuted_blocks=1 00:07:08.828 00:07:08.828 ' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.828 --rc genhtml_branch_coverage=1 00:07:08.828 --rc genhtml_function_coverage=1 00:07:08.828 --rc genhtml_legend=1 00:07:08.828 --rc geninfo_all_blocks=1 00:07:08.828 --rc geninfo_unexecuted_blocks=1 00:07:08.828 00:07:08.828 ' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.828 ************************************ 00:07:08.828 START TEST nvmf_host_management 00:07:08.828 ************************************ 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:08.828 * Looking for test storage... 
00:07:08.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.828 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.089 --rc genhtml_branch_coverage=1 00:07:09.089 --rc genhtml_function_coverage=1 00:07:09.089 --rc genhtml_legend=1 00:07:09.089 --rc geninfo_all_blocks=1 00:07:09.089 --rc geninfo_unexecuted_blocks=1 00:07:09.089 00:07:09.089 ' 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.089 --rc genhtml_branch_coverage=1 00:07:09.089 --rc genhtml_function_coverage=1 00:07:09.089 --rc genhtml_legend=1 00:07:09.089 --rc geninfo_all_blocks=1 00:07:09.089 --rc geninfo_unexecuted_blocks=1 00:07:09.089 00:07:09.089 ' 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.089 --rc genhtml_branch_coverage=1 00:07:09.089 --rc genhtml_function_coverage=1 00:07:09.089 --rc genhtml_legend=1 00:07:09.089 --rc geninfo_all_blocks=1 00:07:09.089 --rc geninfo_unexecuted_blocks=1 00:07:09.089 00:07:09.089 ' 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.089 --rc genhtml_branch_coverage=1 00:07:09.089 --rc genhtml_function_coverage=1 00:07:09.089 --rc genhtml_legend=1 00:07:09.089 --rc geninfo_all_blocks=1 00:07:09.089 --rc geninfo_unexecuted_blocks=1 00:07:09.089 00:07:09.089 ' 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.089 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:09.090 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.090 09:56:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:09.090 Cannot find device "nvmf_init_br" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:09.090 Cannot find device "nvmf_init_br2" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:09.090 Cannot find device "nvmf_tgt_br" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:09.090 Cannot find device "nvmf_tgt_br2" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:09.090 Cannot find device "nvmf_init_br" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:09.090 Cannot find device "nvmf_init_br2" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:09.090 Cannot find device "nvmf_tgt_br" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:09.090 Cannot find device "nvmf_tgt_br2" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:09.090 Cannot find device "nvmf_br" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:09.090 Cannot find device "nvmf_init_if" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:09.090 Cannot find device "nvmf_init_if2" 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:09.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:09.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:09.090 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:09.349 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:09.349 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:09.349 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:09.349 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:09.349 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:09.349 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:09.349 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:09.349 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:09.350 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:09.609 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:09.609 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.218 ms 00:07:09.609 00:07:09.609 --- 10.0.0.3 ping statistics --- 00:07:09.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.609 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:09.609 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:09.609 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:07:09.609 00:07:09.609 --- 10.0.0.4 ping statistics --- 00:07:09.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.609 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:09.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:07:09.609 00:07:09.609 --- 10.0.0.1 ping statistics --- 00:07:09.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.609 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:09.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:09.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:07:09.609 00:07:09.609 --- 10.0.0.2 ping statistics --- 00:07:09.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.609 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62353 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62353 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62353 ']' 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.609 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:09.609 [2024-11-04 09:56:41.699041] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:07:09.609 [2024-11-04 09:56:41.699129] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.869 [2024-11-04 09:56:41.854424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.869 [2024-11-04 09:56:41.921502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.869 [2024-11-04 09:56:41.921581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.869 [2024-11-04 09:56:41.921624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.869 [2024-11-04 09:56:41.921635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.869 [2024-11-04 09:56:41.921644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.869 [2024-11-04 09:56:41.922947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.869 [2024-11-04 09:56:41.923095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.869 [2024-11-04 09:56:41.923222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:09.869 [2024-11-04 09:56:41.923223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.869 [2024-11-04 09:56:41.989047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.806 [2024-11-04 09:56:42.799625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.806 Malloc0 00:07:10.806 [2024-11-04 09:56:42.878448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62407 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62407 /var/tmp/bdevperf.sock 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62407 ']' 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:10.806 { 00:07:10.806 "params": { 00:07:10.806 "name": "Nvme$subsystem", 00:07:10.806 "trtype": "$TEST_TRANSPORT", 00:07:10.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:10.806 "adrfam": "ipv4", 00:07:10.806 "trsvcid": "$NVMF_PORT", 00:07:10.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:10.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:10.806 "hdgst": ${hdgst:-false}, 00:07:10.806 "ddgst": ${ddgst:-false} 00:07:10.806 }, 00:07:10.806 "method": "bdev_nvme_attach_controller" 00:07:10.806 } 00:07:10.806 EOF 00:07:10.806 )") 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:10.806 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:10.806 "params": { 00:07:10.806 "name": "Nvme0", 00:07:10.806 "trtype": "tcp", 00:07:10.806 "traddr": "10.0.0.3", 00:07:10.806 "adrfam": "ipv4", 00:07:10.806 "trsvcid": "4420", 00:07:10.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:10.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:10.806 "hdgst": false, 00:07:10.806 "ddgst": false 00:07:10.806 }, 00:07:10.806 "method": "bdev_nvme_attach_controller" 00:07:10.806 }' 00:07:11.065 [2024-11-04 09:56:42.990793] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:07:11.065 [2024-11-04 09:56:42.990931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62407 ] 00:07:11.065 [2024-11-04 09:56:43.155458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.065 [2024-11-04 09:56:43.219493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.323 [2024-11-04 09:56:43.285133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.323 Running I/O for 10 seconds... 
00:07:11.921 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:11.921 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:11.921 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:11.922 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.182 [2024-11-04 
09:56:44.109387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.182 [2024-11-04 09:56:44.109437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.109452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.182 [2024-11-04 09:56:44.109463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.109473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.182 [2024-11-04 09:56:44.109483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.109493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.182 [2024-11-04 09:56:44.109502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.109512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x896ce0 is same with the state(6) to be set 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.182 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:12.182 [2024-11-04 09:56:44.130472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:12.182 [2024-11-04 09:56:44.130584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 
09:56:44.130801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.182 [2024-11-04 09:56:44.130812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.182 [2024-11-04 09:56:44.130821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.130832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.130841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.130853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.130862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.130874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.130900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.130923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.130933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.130944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.130954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.130964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.130973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.130985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.130994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 
09:56:44.131035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 
09:56:44.131246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 
09:56:44.131449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 09:56:44.131669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.183 [2024-11-04 
09:56:44.131692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.183 [2024-11-04 09:56:44.131703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 09:56:44.131872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.184 [2024-11-04 
09:56:44.131892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.184 [2024-11-04 09:56:44.131902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8912d0 is same with the state(6) to be set 00:07:12.184 [2024-11-04 09:56:44.132024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x896ce0 (9): Bad file descriptor 00:07:12.184 [2024-11-04 09:56:44.133123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:12.184 task offset: 8192 on job bdev=Nvme0n1 fails 00:07:12.184 00:07:12.184 Latency(us) 00:07:12.184 [2024-11-04T09:56:44.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.184 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:12.184 Job: Nvme0n1 ended in about 0.73 seconds with error 00:07:12.184 Verification LBA range: start 0x0 length 0x400 00:07:12.184 Nvme0n1 : 0.73 1490.29 93.14 87.66 0.00 39435.03 1921.40 45041.11 00:07:12.184 [2024-11-04T09:56:44.354Z] =================================================================================================================== 00:07:12.184 [2024-11-04T09:56:44.354Z] Total : 1490.29 93.14 87.66 0.00 39435.03 1921.40 45041.11 00:07:12.184 [2024-11-04 09:56:44.135063] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.184 [2024-11-04 09:56:44.140904] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62407 00:07:13.121 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62407) - No such process 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:13.121 { 00:07:13.121 "params": { 00:07:13.121 "name": "Nvme$subsystem", 00:07:13.121 "trtype": "$TEST_TRANSPORT", 00:07:13.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:13.121 "adrfam": "ipv4", 00:07:13.121 "trsvcid": "$NVMF_PORT", 00:07:13.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:13.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:13.121 "hdgst": ${hdgst:-false}, 00:07:13.121 "ddgst": ${ddgst:-false} 00:07:13.121 }, 00:07:13.121 "method": "bdev_nvme_attach_controller" 00:07:13.121 } 
00:07:13.121 EOF 00:07:13.121 )") 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:13.121 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:13.121 "params": { 00:07:13.121 "name": "Nvme0", 00:07:13.121 "trtype": "tcp", 00:07:13.121 "traddr": "10.0.0.3", 00:07:13.121 "adrfam": "ipv4", 00:07:13.121 "trsvcid": "4420", 00:07:13.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:13.121 "hdgst": false, 00:07:13.121 "ddgst": false 00:07:13.121 }, 00:07:13.121 "method": "bdev_nvme_attach_controller" 00:07:13.121 }' 00:07:13.121 [2024-11-04 09:56:45.178907] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:07:13.121 [2024-11-04 09:56:45.178988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62445 ] 00:07:13.380 [2024-11-04 09:56:45.321536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.380 [2024-11-04 09:56:45.378915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.380 [2024-11-04 09:56:45.440386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.639 Running I/O for 1 seconds... 00:07:14.576 1536.00 IOPS, 96.00 MiB/s 00:07:14.576 Latency(us) 00:07:14.576 [2024-11-04T09:56:46.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:14.576 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:14.576 Verification LBA range: start 0x0 length 0x400 00:07:14.576 Nvme0n1 : 1.04 1545.59 96.60 0.00 0.00 40599.73 4438.57 38130.04 00:07:14.576 [2024-11-04T09:56:46.746Z] =================================================================================================================== 00:07:14.576 [2024-11-04T09:56:46.746Z] Total : 1545.59 96.60 0.00 0.00 40599.73 4438.57 38130.04 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:14.835 09:56:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:14.835 rmmod nvme_tcp 00:07:14.835 rmmod nvme_fabrics 00:07:14.835 rmmod nvme_keyring 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62353 ']' 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62353 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 62353 ']' 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 62353 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62353 00:07:14.835 killing process with pid 62353 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62353' 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 62353 00:07:14.835 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 62353 00:07:15.094 [2024-11-04 09:56:47.141972] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:15.095 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:15.359 00:07:15.359 real 0m6.572s 00:07:15.359 user 0m23.920s 00:07:15.359 sys 0m1.718s 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.359 ************************************ 00:07:15.359 END TEST nvmf_host_management 00:07:15.359 ************************************ 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.359 09:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.359 ************************************ 00:07:15.359 START TEST nvmf_lvol 00:07:15.360 ************************************ 00:07:15.360 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:15.643 * Looking for test 
storage... 00:07:15.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:15.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.643 --rc genhtml_branch_coverage=1 00:07:15.643 --rc genhtml_function_coverage=1 00:07:15.643 --rc genhtml_legend=1 00:07:15.643 --rc geninfo_all_blocks=1 00:07:15.643 --rc geninfo_unexecuted_blocks=1 00:07:15.643 00:07:15.643 ' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:15.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.643 --rc genhtml_branch_coverage=1 00:07:15.643 --rc genhtml_function_coverage=1 00:07:15.643 --rc genhtml_legend=1 00:07:15.643 --rc geninfo_all_blocks=1 00:07:15.643 --rc geninfo_unexecuted_blocks=1 00:07:15.643 00:07:15.643 ' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:15.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.643 --rc genhtml_branch_coverage=1 00:07:15.643 --rc genhtml_function_coverage=1 00:07:15.643 --rc genhtml_legend=1 00:07:15.643 --rc geninfo_all_blocks=1 00:07:15.643 --rc geninfo_unexecuted_blocks=1 00:07:15.643 00:07:15.643 ' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:15.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.643 --rc genhtml_branch_coverage=1 00:07:15.643 --rc genhtml_function_coverage=1 00:07:15.643 --rc genhtml_legend=1 00:07:15.643 --rc geninfo_all_blocks=1 00:07:15.643 --rc geninfo_unexecuted_blocks=1 00:07:15.643 00:07:15.643 ' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.643 09:56:47 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.643 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.643 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:15.644 
09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
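Editor's note: before the lvol test can reach the target at 10.0.0.3, nvmftestinit builds the veth/namespace topology described by the variables above, and nvmf_veth_init does so with plain iproute2 commands, as the trace that follows shows. A condensed sketch of the equivalent setup is given below; it mirrors those commands but omits the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2) and all error handling.

# Target side lives in its own network namespace; initiator side stays in the root ns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target    <-> bridge
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# A Linux bridge stitches the two veth peers together so 10.0.0.1 can reach 10.0.0.3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

The teardown visible at the end of nvmf_host_management above simply reverses these steps: the veth peers are detached from the bridge and brought down, the bridge and interfaces are deleted, and the nvmf_tgt_ns_spdk namespace is removed.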
00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:15.644 Cannot find device "nvmf_init_br" 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:15.644 Cannot find device "nvmf_init_br2" 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:15.644 Cannot find device "nvmf_tgt_br" 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:15.644 Cannot find device "nvmf_tgt_br2" 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:15.644 Cannot find device "nvmf_init_br" 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:15.644 Cannot find device "nvmf_init_br2" 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:15.644 Cannot find device "nvmf_tgt_br" 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:15.644 Cannot find device "nvmf_tgt_br2" 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:15.644 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:15.903 Cannot find device "nvmf_br" 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:15.903 Cannot find device "nvmf_init_if" 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:15.903 Cannot find device "nvmf_init_if2" 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:15.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:15.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:15.903 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:15.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:15.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:15.903 00:07:15.903 --- 10.0.0.3 ping statistics --- 00:07:15.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.903 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:15.903 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:15.903 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:07:15.903 00:07:15.903 --- 10.0.0.4 ping statistics --- 00:07:15.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.903 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:15.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:07:15.903 00:07:15.903 --- 10.0.0.1 ping statistics --- 00:07:15.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.903 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:15.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:15.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:07:15.903 00:07:15.903 --- 10.0.0.2 ping statistics --- 00:07:15.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.903 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.903 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.904 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.904 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.904 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.904 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:15.904 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:15.904 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.904 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.904 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62718 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62718 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 62718 ']' 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:16.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:16.163 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.163 [2024-11-04 09:56:48.133728] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:07:16.163 [2024-11-04 09:56:48.133822] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.163 [2024-11-04 09:56:48.283479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.421 [2024-11-04 09:56:48.345565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.421 [2024-11-04 09:56:48.345631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.421 [2024-11-04 09:56:48.345643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.421 [2024-11-04 09:56:48.345661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.421 [2024-11-04 09:56:48.345669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.421 [2024-11-04 09:56:48.346768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.421 [2024-11-04 09:56:48.346847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.421 [2024-11-04 09:56:48.346852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.421 [2024-11-04 09:56:48.402004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.988 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:16.988 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:16.988 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:16.988 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:16.988 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.988 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.988 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.555 [2024-11-04 09:56:49.499158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.555 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:17.813 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:17.813 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:18.071 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:18.071 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:18.329 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:18.918 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=daae7c63-4fc6-4964-aa01-e2ac4d63fc5f 00:07:18.918 09:56:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u daae7c63-4fc6-4964-aa01-e2ac4d63fc5f lvol 20 00:07:18.918 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b7f088ef-001c-436d-872f-59a1fc47faff 00:07:18.918 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:19.176 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7f088ef-001c-436d-872f-59a1fc47faff 00:07:19.435 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:20.001 [2024-11-04 09:56:51.863381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:20.001 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:20.001 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:20.001 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62799 00:07:20.001 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:21.376 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b7f088ef-001c-436d-872f-59a1fc47faff MY_SNAPSHOT 00:07:21.376 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2bc98c45-8529-406f-a0c4-1a5abbf5a15f 00:07:21.376 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b7f088ef-001c-436d-872f-59a1fc47faff 30 00:07:21.684 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2bc98c45-8529-406f-a0c4-1a5abbf5a15f MY_CLONE 00:07:21.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=de54174a-3b34-43cc-8331-717d55c54d65 00:07:21.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate de54174a-3b34-43cc-8331-717d55c54d65 00:07:22.510 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62799 00:07:30.655 Initializing NVMe Controllers 00:07:30.655 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:30.655 Controller IO queue size 128, less than required. 00:07:30.655 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:30.655 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:30.655 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:30.655 Initialization complete. Launching workers. 
00:07:30.655 ======================================================== 00:07:30.655 Latency(us) 00:07:30.655 Device Information : IOPS MiB/s Average min max 00:07:30.655 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10653.00 41.61 12015.97 588.23 87447.25 00:07:30.655 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10573.10 41.30 12104.78 3014.12 63355.28 00:07:30.655 ======================================================== 00:07:30.655 Total : 21226.10 82.91 12060.21 588.23 87447.25 00:07:30.655 00:07:30.655 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:30.655 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b7f088ef-001c-436d-872f-59a1fc47faff 00:07:30.914 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u daae7c63-4fc6-4964-aa01-e2ac4d63fc5f 00:07:31.172 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:31.172 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:31.172 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:31.172 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:31.172 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:31.431 rmmod nvme_tcp 00:07:31.431 rmmod nvme_fabrics 00:07:31.431 rmmod nvme_keyring 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62718 ']' 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62718 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 62718 ']' 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 62718 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62718 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:31.431 killing process with pid 62718 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 62718' 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 62718 00:07:31.431 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 62718 00:07:31.690 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:31.691 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:31.949 00:07:31.949 real 0m16.453s 00:07:31.949 user 1m7.210s 00:07:31.949 sys 0m4.413s 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:07:31.949 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.949 ************************************ 00:07:31.949 END TEST nvmf_lvol 00:07:31.949 ************************************ 00:07:31.949 09:57:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.949 09:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:31.949 09:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.949 09:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.949 ************************************ 00:07:31.949 START TEST nvmf_lvs_grow 00:07:31.949 ************************************ 00:07:31.949 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.949 * Looking for test storage... 00:07:31.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.949 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:31.949 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:31.949 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:32.209 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.210 --rc genhtml_branch_coverage=1 00:07:32.210 --rc genhtml_function_coverage=1 00:07:32.210 --rc genhtml_legend=1 00:07:32.210 --rc geninfo_all_blocks=1 00:07:32.210 --rc geninfo_unexecuted_blocks=1 00:07:32.210 00:07:32.210 ' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.210 --rc genhtml_branch_coverage=1 00:07:32.210 --rc genhtml_function_coverage=1 00:07:32.210 --rc genhtml_legend=1 00:07:32.210 --rc geninfo_all_blocks=1 00:07:32.210 --rc geninfo_unexecuted_blocks=1 00:07:32.210 00:07:32.210 ' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.210 --rc genhtml_branch_coverage=1 00:07:32.210 --rc genhtml_function_coverage=1 00:07:32.210 --rc genhtml_legend=1 00:07:32.210 --rc geninfo_all_blocks=1 00:07:32.210 --rc geninfo_unexecuted_blocks=1 00:07:32.210 00:07:32.210 ' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.210 --rc genhtml_branch_coverage=1 00:07:32.210 --rc genhtml_function_coverage=1 00:07:32.210 --rc genhtml_legend=1 00:07:32.210 --rc geninfo_all_blocks=1 00:07:32.210 --rc geninfo_unexecuted_blocks=1 00:07:32.210 00:07:32.210 ' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:32.210 09:57:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:32.210 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:32.210 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:32.211 Cannot find device "nvmf_init_br" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:32.211 Cannot find device "nvmf_init_br2" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:32.211 Cannot find device "nvmf_tgt_br" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:32.211 Cannot find device "nvmf_tgt_br2" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:32.211 Cannot find device "nvmf_init_br" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:32.211 Cannot find device "nvmf_init_br2" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:32.211 Cannot find device "nvmf_tgt_br" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:32.211 Cannot find device "nvmf_tgt_br2" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:32.211 Cannot find device "nvmf_br" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:32.211 Cannot find device "nvmf_init_if" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:32.211 Cannot find device "nvmf_init_if2" 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:32.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:32.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:32.211 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:32.471 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:32.471 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:07:32.471 00:07:32.471 --- 10.0.0.3 ping statistics --- 00:07:32.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.471 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:32.471 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:32.471 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:07:32.471 00:07:32.471 --- 10.0.0.4 ping statistics --- 00:07:32.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.471 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:32.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:07:32.471 00:07:32.471 --- 10.0.0.1 ping statistics --- 00:07:32.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.471 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:32.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:32.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:07:32.471 00:07:32.471 --- 10.0.0.2 ping statistics --- 00:07:32.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.471 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.471 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63177 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63177 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 63177 ']' 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.472 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.731 [2024-11-04 09:57:04.667405] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:07:32.731 [2024-11-04 09:57:04.667493] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.731 [2024-11-04 09:57:04.819781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.731 [2024-11-04 09:57:04.879048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.731 [2024-11-04 09:57:04.879114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.731 [2024-11-04 09:57:04.879128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.731 [2024-11-04 09:57:04.879138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.731 [2024-11-04 09:57:04.879147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:32.731 [2024-11-04 09:57:04.879613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.990 [2024-11-04 09:57:04.938399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.990 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:32.990 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:32.990 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.990 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.990 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:32.990 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.990 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:33.249 [2024-11-04 09:57:05.328934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.249 ************************************ 00:07:33.249 START TEST lvs_grow_clean 00:07:33.249 ************************************ 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:33.249 09:57:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:33.249 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:33.816 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:33.816 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:33.816 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:33.816 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:33.816 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:34.074 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:34.074 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:34.074 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 lvol 150 00:07:34.640 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4a4ebb95-1fee-40ca-bb1a-88557e49835b 00:07:34.640 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:34.640 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:34.640 [2024-11-04 09:57:06.790470] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:34.640 [2024-11-04 09:57:06.790563] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:34.640 true 00:07:34.899 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:34.899 09:57:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:35.158 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:35.158 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.417 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4a4ebb95-1fee-40ca-bb1a-88557e49835b 00:07:35.680 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:35.939 [2024-11-04 09:57:07.903142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:35.939 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63258 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63258 /var/tmp/bdevperf.sock 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 63258 ']' 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.196 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:36.196 [2024-11-04 09:57:08.277532] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:07:36.196 [2024-11-04 09:57:08.277658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63258 ] 00:07:36.455 [2024-11-04 09:57:08.434088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.455 [2024-11-04 09:57:08.502521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.455 [2024-11-04 09:57:08.560813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.394 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.394 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:37.394 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:37.652 Nvme0n1 00:07:37.652 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:37.910 [ 00:07:37.910 { 00:07:37.910 "name": "Nvme0n1", 00:07:37.910 "aliases": [ 00:07:37.910 "4a4ebb95-1fee-40ca-bb1a-88557e49835b" 00:07:37.910 ], 00:07:37.910 "product_name": "NVMe disk", 00:07:37.910 "block_size": 4096, 00:07:37.910 "num_blocks": 38912, 00:07:37.910 "uuid": "4a4ebb95-1fee-40ca-bb1a-88557e49835b", 00:07:37.910 "numa_id": -1, 00:07:37.910 "assigned_rate_limits": { 00:07:37.910 "rw_ios_per_sec": 0, 00:07:37.910 "rw_mbytes_per_sec": 0, 00:07:37.910 "r_mbytes_per_sec": 0, 00:07:37.910 "w_mbytes_per_sec": 0 00:07:37.910 }, 00:07:37.910 "claimed": false, 00:07:37.910 "zoned": false, 00:07:37.910 "supported_io_types": { 00:07:37.910 "read": true, 00:07:37.910 "write": true, 00:07:37.910 "unmap": true, 00:07:37.910 "flush": true, 00:07:37.910 "reset": true, 00:07:37.910 "nvme_admin": true, 00:07:37.910 "nvme_io": true, 00:07:37.910 "nvme_io_md": false, 00:07:37.911 "write_zeroes": true, 00:07:37.911 "zcopy": false, 00:07:37.911 "get_zone_info": false, 00:07:37.911 "zone_management": false, 00:07:37.911 "zone_append": false, 00:07:37.911 "compare": true, 00:07:37.911 "compare_and_write": true, 00:07:37.911 "abort": true, 00:07:37.911 "seek_hole": false, 00:07:37.911 "seek_data": false, 00:07:37.911 "copy": true, 00:07:37.911 "nvme_iov_md": false 00:07:37.911 }, 00:07:37.911 "memory_domains": [ 00:07:37.911 { 00:07:37.911 "dma_device_id": "system", 00:07:37.911 "dma_device_type": 1 00:07:37.911 } 00:07:37.911 ], 00:07:37.911 "driver_specific": { 00:07:37.911 "nvme": [ 00:07:37.911 { 00:07:37.911 "trid": { 00:07:37.911 "trtype": "TCP", 00:07:37.911 "adrfam": "IPv4", 00:07:37.911 "traddr": "10.0.0.3", 00:07:37.911 "trsvcid": "4420", 00:07:37.911 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:37.911 }, 00:07:37.911 "ctrlr_data": { 00:07:37.911 "cntlid": 1, 00:07:37.911 "vendor_id": "0x8086", 00:07:37.911 "model_number": "SPDK bdev Controller", 00:07:37.911 "serial_number": "SPDK0", 00:07:37.911 "firmware_revision": "25.01", 00:07:37.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.911 "oacs": { 00:07:37.911 "security": 0, 00:07:37.911 "format": 0, 00:07:37.911 "firmware": 0, 
00:07:37.911 "ns_manage": 0 00:07:37.911 }, 00:07:37.911 "multi_ctrlr": true, 00:07:37.911 "ana_reporting": false 00:07:37.911 }, 00:07:37.911 "vs": { 00:07:37.911 "nvme_version": "1.3" 00:07:37.911 }, 00:07:37.911 "ns_data": { 00:07:37.911 "id": 1, 00:07:37.911 "can_share": true 00:07:37.911 } 00:07:37.911 } 00:07:37.911 ], 00:07:37.911 "mp_policy": "active_passive" 00:07:37.911 } 00:07:37.911 } 00:07:37.911 ] 00:07:37.911 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63281 00:07:37.911 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:37.911 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:38.170 Running I/O for 10 seconds... 00:07:39.174 Latency(us) 00:07:39.174 [2024-11-04T09:57:11.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.175 Nvme0n1 : 1.00 7296.00 28.50 0.00 0.00 0.00 0.00 0.00 00:07:39.175 [2024-11-04T09:57:11.345Z] =================================================================================================================== 00:07:39.175 [2024-11-04T09:57:11.345Z] Total : 7296.00 28.50 0.00 0.00 0.00 0.00 0.00 00:07:39.175 00:07:40.110 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:40.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.110 Nvme0n1 : 2.00 7204.00 28.14 0.00 0.00 0.00 0.00 0.00 00:07:40.110 [2024-11-04T09:57:12.280Z] =================================================================================================================== 00:07:40.110 [2024-11-04T09:57:12.280Z] Total : 7204.00 28.14 0.00 0.00 0.00 0.00 0.00 00:07:40.110 00:07:40.368 true 00:07:40.368 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:40.368 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:40.666 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:40.666 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:40.666 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63281 00:07:41.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.255 Nvme0n1 : 3.00 7173.33 28.02 0.00 0.00 0.00 0.00 0.00 00:07:41.255 [2024-11-04T09:57:13.425Z] =================================================================================================================== 00:07:41.255 [2024-11-04T09:57:13.425Z] Total : 7173.33 28.02 0.00 0.00 0.00 0.00 0.00 00:07:41.255 00:07:42.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.188 Nvme0n1 : 4.00 7094.50 27.71 0.00 0.00 0.00 0.00 0.00 00:07:42.188 [2024-11-04T09:57:14.358Z] 
=================================================================================================================== 00:07:42.188 [2024-11-04T09:57:14.358Z] Total : 7094.50 27.71 0.00 0.00 0.00 0.00 0.00 00:07:42.188 00:07:43.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.122 Nvme0n1 : 5.00 7072.60 27.63 0.00 0.00 0.00 0.00 0.00 00:07:43.122 [2024-11-04T09:57:15.292Z] =================================================================================================================== 00:07:43.122 [2024-11-04T09:57:15.292Z] Total : 7072.60 27.63 0.00 0.00 0.00 0.00 0.00 00:07:43.122 00:07:44.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.068 Nvme0n1 : 6.00 7016.67 27.41 0.00 0.00 0.00 0.00 0.00 00:07:44.068 [2024-11-04T09:57:16.238Z] =================================================================================================================== 00:07:44.068 [2024-11-04T09:57:16.238Z] Total : 7016.67 27.41 0.00 0.00 0.00 0.00 0.00 00:07:44.068 00:07:45.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.021 Nvme0n1 : 7.00 7066.57 27.60 0.00 0.00 0.00 0.00 0.00 00:07:45.021 [2024-11-04T09:57:17.191Z] =================================================================================================================== 00:07:45.021 [2024-11-04T09:57:17.191Z] Total : 7066.57 27.60 0.00 0.00 0.00 0.00 0.00 00:07:45.021 00:07:46.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.394 Nvme0n1 : 8.00 7088.12 27.69 0.00 0.00 0.00 0.00 0.00 00:07:46.394 [2024-11-04T09:57:18.564Z] =================================================================================================================== 00:07:46.394 [2024-11-04T09:57:18.564Z] Total : 7088.12 27.69 0.00 0.00 0.00 0.00 0.00 00:07:46.394 00:07:47.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.328 Nvme0n1 : 9.00 7076.67 27.64 0.00 0.00 0.00 0.00 0.00 00:07:47.328 [2024-11-04T09:57:19.498Z] =================================================================================================================== 00:07:47.328 [2024-11-04T09:57:19.498Z] Total : 7076.67 27.64 0.00 0.00 0.00 0.00 0.00 00:07:47.328 00:07:48.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.262 Nvme0n1 : 10.00 7067.50 27.61 0.00 0.00 0.00 0.00 0.00 00:07:48.262 [2024-11-04T09:57:20.432Z] =================================================================================================================== 00:07:48.262 [2024-11-04T09:57:20.432Z] Total : 7067.50 27.61 0.00 0.00 0.00 0.00 0.00 00:07:48.262 00:07:48.262 00:07:48.262 Latency(us) 00:07:48.262 [2024-11-04T09:57:20.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.263 Nvme0n1 : 10.00 7077.39 27.65 0.00 0.00 18080.94 8996.31 115819.99 00:07:48.263 [2024-11-04T09:57:20.433Z] =================================================================================================================== 00:07:48.263 [2024-11-04T09:57:20.433Z] Total : 7077.39 27.65 0.00 0.00 18080.94 8996.31 115819.99 00:07:48.263 { 00:07:48.263 "results": [ 00:07:48.263 { 00:07:48.263 "job": "Nvme0n1", 00:07:48.263 "core_mask": "0x2", 00:07:48.263 "workload": "randwrite", 00:07:48.263 "status": "finished", 00:07:48.263 "queue_depth": 128, 00:07:48.263 "io_size": 4096, 00:07:48.263 "runtime": 
10.004114, 00:07:48.263 "iops": 7077.388362427697, 00:07:48.263 "mibps": 27.64604829073319, 00:07:48.263 "io_failed": 0, 00:07:48.263 "io_timeout": 0, 00:07:48.263 "avg_latency_us": 18080.944863918197, 00:07:48.263 "min_latency_us": 8996.305454545454, 00:07:48.263 "max_latency_us": 115819.98545454546 00:07:48.263 } 00:07:48.263 ], 00:07:48.263 "core_count": 1 00:07:48.263 } 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63258 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 63258 ']' 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 63258 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63258 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:48.263 killing process with pid 63258 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63258' 00:07:48.263 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.263 00:07:48.263 Latency(us) 00:07:48.263 [2024-11-04T09:57:20.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.263 [2024-11-04T09:57:20.433Z] =================================================================================================================== 00:07:48.263 [2024-11-04T09:57:20.433Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 63258 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 63258 00:07:48.263 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:48.521 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:49.088 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:49.088 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:49.346 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:49.346 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:49.346 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.605 [2024-11-04 09:57:21.565307] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:49.605 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:49.863 request: 00:07:49.863 { 00:07:49.863 "uuid": "b59d12d8-b33e-42aa-ae23-c0832cad4b71", 00:07:49.863 "method": "bdev_lvol_get_lvstores", 00:07:49.863 "req_id": 1 00:07:49.863 } 00:07:49.863 Got JSON-RPC error response 00:07:49.863 response: 00:07:49.863 { 00:07:49.863 "code": -19, 00:07:49.863 "message": "No such device" 00:07:49.863 } 00:07:49.863 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:49.863 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.863 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.863 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.863 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.122 aio_bdev 00:07:50.122 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
4a4ebb95-1fee-40ca-bb1a-88557e49835b 00:07:50.122 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=4a4ebb95-1fee-40ca-bb1a-88557e49835b 00:07:50.122 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:50.122 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:50.122 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:50.122 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:50.122 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:50.399 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a4ebb95-1fee-40ca-bb1a-88557e49835b -t 2000 00:07:50.663 [ 00:07:50.663 { 00:07:50.663 "name": "4a4ebb95-1fee-40ca-bb1a-88557e49835b", 00:07:50.663 "aliases": [ 00:07:50.663 "lvs/lvol" 00:07:50.663 ], 00:07:50.663 "product_name": "Logical Volume", 00:07:50.663 "block_size": 4096, 00:07:50.663 "num_blocks": 38912, 00:07:50.663 "uuid": "4a4ebb95-1fee-40ca-bb1a-88557e49835b", 00:07:50.663 "assigned_rate_limits": { 00:07:50.663 "rw_ios_per_sec": 0, 00:07:50.663 "rw_mbytes_per_sec": 0, 00:07:50.663 "r_mbytes_per_sec": 0, 00:07:50.663 "w_mbytes_per_sec": 0 00:07:50.663 }, 00:07:50.663 "claimed": false, 00:07:50.663 "zoned": false, 00:07:50.663 "supported_io_types": { 00:07:50.663 "read": true, 00:07:50.663 "write": true, 00:07:50.663 "unmap": true, 00:07:50.663 "flush": false, 00:07:50.663 "reset": true, 00:07:50.663 "nvme_admin": false, 00:07:50.663 "nvme_io": false, 00:07:50.663 "nvme_io_md": false, 00:07:50.663 "write_zeroes": true, 00:07:50.663 "zcopy": false, 00:07:50.663 "get_zone_info": false, 00:07:50.663 "zone_management": false, 00:07:50.663 "zone_append": false, 00:07:50.663 "compare": false, 00:07:50.663 "compare_and_write": false, 00:07:50.663 "abort": false, 00:07:50.663 "seek_hole": true, 00:07:50.663 "seek_data": true, 00:07:50.663 "copy": false, 00:07:50.663 "nvme_iov_md": false 00:07:50.663 }, 00:07:50.663 "driver_specific": { 00:07:50.663 "lvol": { 00:07:50.663 "lvol_store_uuid": "b59d12d8-b33e-42aa-ae23-c0832cad4b71", 00:07:50.663 "base_bdev": "aio_bdev", 00:07:50.663 "thin_provision": false, 00:07:50.663 "num_allocated_clusters": 38, 00:07:50.663 "snapshot": false, 00:07:50.663 "clone": false, 00:07:50.663 "esnap_clone": false 00:07:50.663 } 00:07:50.663 } 00:07:50.663 } 00:07:50.663 ] 00:07:50.663 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:50.663 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:50.663 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:50.922 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:50.922 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:50.922 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:51.489 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:51.489 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4a4ebb95-1fee-40ca-bb1a-88557e49835b 00:07:51.489 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b59d12d8-b33e-42aa-ae23-c0832cad4b71 00:07:51.748 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.314 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:52.573 ************************************ 00:07:52.573 END TEST lvs_grow_clean 00:07:52.573 ************************************ 00:07:52.573 00:07:52.573 real 0m19.279s 00:07:52.573 user 0m18.432s 00:07:52.573 sys 0m2.630s 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.573 ************************************ 00:07:52.573 START TEST lvs_grow_dirty 00:07:52.573 ************************************ 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:52.573 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.141 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:53.141 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:53.464 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0b0da261-d220-427b-bb4c-fc6419407be1 00:07:53.465 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:53.465 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:07:53.723 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:53.723 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:53.723 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b0da261-d220-427b-bb4c-fc6419407be1 lvol 150 00:07:53.982 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=806edb90-3ea1-44a6-afa7-1e658658602c 00:07:53.982 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:53.982 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:54.240 [2024-11-04 09:57:26.177413] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:54.240 [2024-11-04 09:57:26.177534] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:54.240 true 00:07:54.240 09:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:07:54.240 09:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:54.498 09:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:54.498 09:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.757 09:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 806edb90-3ea1-44a6-afa7-1e658658602c 00:07:55.016 09:57:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:55.275 [2024-11-04 09:57:27.273995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:55.275 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:55.533 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:55.533 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63533 00:07:55.534 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:55.534 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63533 /var/tmp/bdevperf.sock 00:07:55.534 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63533 ']' 00:07:55.534 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:55.534 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.534 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.534 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.534 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.534 [2024-11-04 09:57:27.606874] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:07:55.534 [2024-11-04 09:57:27.607253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63533 ] 00:07:55.792 [2024-11-04 09:57:27.756271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.792 [2024-11-04 09:57:27.818552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.792 [2024-11-04 09:57:27.874763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.792 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.792 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:55.792 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:56.359 Nvme0n1 00:07:56.359 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:56.359 [ 00:07:56.359 { 00:07:56.359 "name": "Nvme0n1", 00:07:56.359 "aliases": [ 00:07:56.359 "806edb90-3ea1-44a6-afa7-1e658658602c" 00:07:56.359 ], 00:07:56.359 "product_name": "NVMe disk", 00:07:56.359 "block_size": 4096, 00:07:56.359 "num_blocks": 38912, 00:07:56.359 "uuid": "806edb90-3ea1-44a6-afa7-1e658658602c", 00:07:56.359 "numa_id": -1, 00:07:56.359 "assigned_rate_limits": { 00:07:56.359 "rw_ios_per_sec": 0, 00:07:56.359 "rw_mbytes_per_sec": 0, 00:07:56.359 "r_mbytes_per_sec": 0, 00:07:56.359 "w_mbytes_per_sec": 0 00:07:56.359 }, 00:07:56.359 "claimed": false, 00:07:56.359 "zoned": false, 00:07:56.359 "supported_io_types": { 00:07:56.359 "read": true, 00:07:56.359 "write": true, 00:07:56.359 "unmap": true, 00:07:56.359 "flush": true, 00:07:56.359 "reset": true, 00:07:56.359 "nvme_admin": true, 00:07:56.359 "nvme_io": true, 00:07:56.359 "nvme_io_md": false, 00:07:56.359 "write_zeroes": true, 00:07:56.359 "zcopy": false, 00:07:56.359 "get_zone_info": false, 00:07:56.359 "zone_management": false, 00:07:56.359 "zone_append": false, 00:07:56.359 "compare": true, 00:07:56.359 "compare_and_write": true, 00:07:56.359 "abort": true, 00:07:56.359 "seek_hole": false, 00:07:56.359 "seek_data": false, 00:07:56.359 "copy": true, 00:07:56.359 "nvme_iov_md": false 00:07:56.359 }, 00:07:56.359 "memory_domains": [ 00:07:56.359 { 00:07:56.359 "dma_device_id": "system", 00:07:56.359 "dma_device_type": 1 00:07:56.359 } 00:07:56.359 ], 00:07:56.359 "driver_specific": { 00:07:56.359 "nvme": [ 00:07:56.359 { 00:07:56.359 "trid": { 00:07:56.359 "trtype": "TCP", 00:07:56.359 "adrfam": "IPv4", 00:07:56.359 "traddr": "10.0.0.3", 00:07:56.359 "trsvcid": "4420", 00:07:56.359 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:56.359 }, 00:07:56.359 "ctrlr_data": { 00:07:56.359 "cntlid": 1, 00:07:56.359 "vendor_id": "0x8086", 00:07:56.359 "model_number": "SPDK bdev Controller", 00:07:56.359 "serial_number": "SPDK0", 00:07:56.359 "firmware_revision": "25.01", 00:07:56.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:56.359 "oacs": { 00:07:56.359 "security": 0, 00:07:56.359 "format": 0, 00:07:56.359 "firmware": 0, 
00:07:56.359 "ns_manage": 0 00:07:56.359 }, 00:07:56.359 "multi_ctrlr": true, 00:07:56.359 "ana_reporting": false 00:07:56.359 }, 00:07:56.359 "vs": { 00:07:56.359 "nvme_version": "1.3" 00:07:56.359 }, 00:07:56.359 "ns_data": { 00:07:56.359 "id": 1, 00:07:56.359 "can_share": true 00:07:56.359 } 00:07:56.359 } 00:07:56.359 ], 00:07:56.359 "mp_policy": "active_passive" 00:07:56.359 } 00:07:56.359 } 00:07:56.359 ] 00:07:56.359 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63549 00:07:56.359 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:56.360 09:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:56.618 Running I/O for 10 seconds... 00:07:57.554 Latency(us) 00:07:57.554 [2024-11-04T09:57:29.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.554 Nvme0n1 : 1.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:57.554 [2024-11-04T09:57:29.724Z] =================================================================================================================== 00:07:57.554 [2024-11-04T09:57:29.724Z] Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:57.554 00:07:58.489 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:07:58.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.489 Nvme0n1 : 2.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:58.489 [2024-11-04T09:57:30.659Z] =================================================================================================================== 00:07:58.489 [2024-11-04T09:57:30.659Z] Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:07:58.489 00:07:58.763 true 00:07:58.763 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:07:58.763 09:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:59.021 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:59.021 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:59.021 09:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63549 00:07:59.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.588 Nvme0n1 : 3.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:07:59.588 [2024-11-04T09:57:31.758Z] =================================================================================================================== 00:07:59.588 [2024-11-04T09:57:31.758Z] Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:07:59.588 00:08:00.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.524 Nvme0n1 : 4.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:00.524 [2024-11-04T09:57:32.694Z] 
=================================================================================================================== 00:08:00.524 [2024-11-04T09:57:32.694Z] Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:00.524 00:08:01.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.462 Nvme0n1 : 5.00 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:08:01.462 [2024-11-04T09:57:33.632Z] =================================================================================================================== 00:08:01.462 [2024-11-04T09:57:33.632Z] Total : 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:08:01.462 00:08:02.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.838 Nvme0n1 : 6.00 7315.67 28.58 0.00 0.00 0.00 0.00 0.00 00:08:02.838 [2024-11-04T09:57:35.008Z] =================================================================================================================== 00:08:02.838 [2024-11-04T09:57:35.008Z] Total : 7315.67 28.58 0.00 0.00 0.00 0.00 0.00 00:08:02.838 00:08:03.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.775 Nvme0n1 : 7.00 7195.86 28.11 0.00 0.00 0.00 0.00 0.00 00:08:03.775 [2024-11-04T09:57:35.945Z] =================================================================================================================== 00:08:03.775 [2024-11-04T09:57:35.945Z] Total : 7195.86 28.11 0.00 0.00 0.00 0.00 0.00 00:08:03.775 00:08:04.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.711 Nvme0n1 : 8.00 7169.50 28.01 0.00 0.00 0.00 0.00 0.00 00:08:04.711 [2024-11-04T09:57:36.881Z] =================================================================================================================== 00:08:04.711 [2024-11-04T09:57:36.881Z] Total : 7169.50 28.01 0.00 0.00 0.00 0.00 0.00 00:08:04.711 00:08:05.648 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.648 Nvme0n1 : 9.00 7134.89 27.87 0.00 0.00 0.00 0.00 0.00 00:08:05.648 [2024-11-04T09:57:37.818Z] =================================================================================================================== 00:08:05.648 [2024-11-04T09:57:37.818Z] Total : 7134.89 27.87 0.00 0.00 0.00 0.00 0.00 00:08:05.648 00:08:06.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.584 Nvme0n1 : 10.00 7119.90 27.81 0.00 0.00 0.00 0.00 0.00 00:08:06.584 [2024-11-04T09:57:38.754Z] =================================================================================================================== 00:08:06.584 [2024-11-04T09:57:38.754Z] Total : 7119.90 27.81 0.00 0.00 0.00 0.00 0.00 00:08:06.584 00:08:06.584 00:08:06.584 Latency(us) 00:08:06.584 [2024-11-04T09:57:38.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.584 Nvme0n1 : 10.01 7124.03 27.83 0.00 0.00 17962.19 11677.32 131548.63 00:08:06.584 [2024-11-04T09:57:38.754Z] =================================================================================================================== 00:08:06.584 [2024-11-04T09:57:38.754Z] Total : 7124.03 27.83 0.00 0.00 17962.19 11677.32 131548.63 00:08:06.584 { 00:08:06.584 "results": [ 00:08:06.584 { 00:08:06.584 "job": "Nvme0n1", 00:08:06.584 "core_mask": "0x2", 00:08:06.584 "workload": "randwrite", 00:08:06.584 "status": "finished", 00:08:06.584 "queue_depth": 128, 00:08:06.584 "io_size": 4096, 00:08:06.584 "runtime": 
10.012168, 00:08:06.584 "iops": 7124.031478496965, 00:08:06.584 "mibps": 27.82824796287877, 00:08:06.584 "io_failed": 0, 00:08:06.584 "io_timeout": 0, 00:08:06.584 "avg_latency_us": 17962.186168338652, 00:08:06.584 "min_latency_us": 11677.323636363637, 00:08:06.584 "max_latency_us": 131548.62545454546 00:08:06.584 } 00:08:06.584 ], 00:08:06.584 "core_count": 1 00:08:06.584 } 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63533 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 63533 ']' 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 63533 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63533 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63533' 00:08:06.584 killing process with pid 63533 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 63533 00:08:06.584 Received shutdown signal, test time was about 10.000000 seconds 00:08:06.584 00:08:06.584 Latency(us) 00:08:06.584 [2024-11-04T09:57:38.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.584 [2024-11-04T09:57:38.754Z] =================================================================================================================== 00:08:06.584 [2024-11-04T09:57:38.754Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:06.584 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 63533 00:08:06.843 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:07.102 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.360 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:07.360 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63177 
00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63177 00:08:07.929 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63177 Killed "${NVMF_APP[@]}" "$@" 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63687 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63687 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63687 ']' 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.929 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:07.929 [2024-11-04 09:57:39.898920] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:08:07.929 [2024-11-04 09:57:39.899209] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.929 [2024-11-04 09:57:40.044766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.187 [2024-11-04 09:57:40.097872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.187 [2024-11-04 09:57:40.097935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.187 [2024-11-04 09:57:40.097947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.187 [2024-11-04 09:57:40.097956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.187 [2024-11-04 09:57:40.097964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:08.187 [2024-11-04 09:57:40.098341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.187 [2024-11-04 09:57:40.152754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.187 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.187 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:08.187 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:08.187 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.187 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.188 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.188 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.446 [2024-11-04 09:57:40.544084] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:08.446 [2024-11-04 09:57:40.544557] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:08.446 [2024-11-04 09:57:40.544979] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:08.446 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:08.446 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 806edb90-3ea1-44a6-afa7-1e658658602c 00:08:08.446 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=806edb90-3ea1-44a6-afa7-1e658658602c 00:08:08.446 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:08.446 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:08.446 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:08.446 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:08.446 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:08.704 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 806edb90-3ea1-44a6-afa7-1e658658602c -t 2000 00:08:09.279 [ 00:08:09.279 { 00:08:09.279 "name": "806edb90-3ea1-44a6-afa7-1e658658602c", 00:08:09.279 "aliases": [ 00:08:09.279 "lvs/lvol" 00:08:09.279 ], 00:08:09.279 "product_name": "Logical Volume", 00:08:09.279 "block_size": 4096, 00:08:09.279 "num_blocks": 38912, 00:08:09.279 "uuid": "806edb90-3ea1-44a6-afa7-1e658658602c", 00:08:09.279 "assigned_rate_limits": { 00:08:09.279 "rw_ios_per_sec": 0, 00:08:09.279 "rw_mbytes_per_sec": 0, 00:08:09.279 "r_mbytes_per_sec": 0, 00:08:09.279 "w_mbytes_per_sec": 0 00:08:09.279 }, 00:08:09.279 
"claimed": false, 00:08:09.279 "zoned": false, 00:08:09.279 "supported_io_types": { 00:08:09.279 "read": true, 00:08:09.279 "write": true, 00:08:09.279 "unmap": true, 00:08:09.279 "flush": false, 00:08:09.279 "reset": true, 00:08:09.279 "nvme_admin": false, 00:08:09.279 "nvme_io": false, 00:08:09.279 "nvme_io_md": false, 00:08:09.279 "write_zeroes": true, 00:08:09.279 "zcopy": false, 00:08:09.279 "get_zone_info": false, 00:08:09.279 "zone_management": false, 00:08:09.279 "zone_append": false, 00:08:09.279 "compare": false, 00:08:09.279 "compare_and_write": false, 00:08:09.279 "abort": false, 00:08:09.279 "seek_hole": true, 00:08:09.279 "seek_data": true, 00:08:09.279 "copy": false, 00:08:09.279 "nvme_iov_md": false 00:08:09.279 }, 00:08:09.279 "driver_specific": { 00:08:09.279 "lvol": { 00:08:09.279 "lvol_store_uuid": "0b0da261-d220-427b-bb4c-fc6419407be1", 00:08:09.279 "base_bdev": "aio_bdev", 00:08:09.279 "thin_provision": false, 00:08:09.279 "num_allocated_clusters": 38, 00:08:09.279 "snapshot": false, 00:08:09.279 "clone": false, 00:08:09.279 "esnap_clone": false 00:08:09.279 } 00:08:09.279 } 00:08:09.279 } 00:08:09.279 ] 00:08:09.279 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:09.279 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:09.279 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:09.539 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:09.539 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:09.539 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:09.798 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:09.798 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.056 [2024-11-04 09:57:42.077742] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:10.056 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:10.056 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:10.056 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:10.056 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.057 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.057 09:57:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.057 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.057 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.057 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.057 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:10.057 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:10.057 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:10.315 request: 00:08:10.315 { 00:08:10.315 "uuid": "0b0da261-d220-427b-bb4c-fc6419407be1", 00:08:10.315 "method": "bdev_lvol_get_lvstores", 00:08:10.315 "req_id": 1 00:08:10.315 } 00:08:10.315 Got JSON-RPC error response 00:08:10.315 response: 00:08:10.315 { 00:08:10.315 "code": -19, 00:08:10.315 "message": "No such device" 00:08:10.315 } 00:08:10.315 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:10.315 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.315 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.315 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.315 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.578 aio_bdev 00:08:10.578 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 806edb90-3ea1-44a6-afa7-1e658658602c 00:08:10.578 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=806edb90-3ea1-44a6-afa7-1e658658602c 00:08:10.578 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:10.578 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:10.578 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:10.578 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:10.578 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:11.146 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 806edb90-3ea1-44a6-afa7-1e658658602c -t 2000 00:08:11.146 [ 00:08:11.146 { 
00:08:11.146 "name": "806edb90-3ea1-44a6-afa7-1e658658602c", 00:08:11.146 "aliases": [ 00:08:11.146 "lvs/lvol" 00:08:11.146 ], 00:08:11.146 "product_name": "Logical Volume", 00:08:11.146 "block_size": 4096, 00:08:11.146 "num_blocks": 38912, 00:08:11.146 "uuid": "806edb90-3ea1-44a6-afa7-1e658658602c", 00:08:11.146 "assigned_rate_limits": { 00:08:11.146 "rw_ios_per_sec": 0, 00:08:11.146 "rw_mbytes_per_sec": 0, 00:08:11.146 "r_mbytes_per_sec": 0, 00:08:11.146 "w_mbytes_per_sec": 0 00:08:11.146 }, 00:08:11.146 "claimed": false, 00:08:11.146 "zoned": false, 00:08:11.146 "supported_io_types": { 00:08:11.146 "read": true, 00:08:11.146 "write": true, 00:08:11.146 "unmap": true, 00:08:11.146 "flush": false, 00:08:11.146 "reset": true, 00:08:11.146 "nvme_admin": false, 00:08:11.146 "nvme_io": false, 00:08:11.146 "nvme_io_md": false, 00:08:11.146 "write_zeroes": true, 00:08:11.146 "zcopy": false, 00:08:11.146 "get_zone_info": false, 00:08:11.146 "zone_management": false, 00:08:11.146 "zone_append": false, 00:08:11.146 "compare": false, 00:08:11.146 "compare_and_write": false, 00:08:11.146 "abort": false, 00:08:11.146 "seek_hole": true, 00:08:11.146 "seek_data": true, 00:08:11.146 "copy": false, 00:08:11.146 "nvme_iov_md": false 00:08:11.146 }, 00:08:11.146 "driver_specific": { 00:08:11.146 "lvol": { 00:08:11.146 "lvol_store_uuid": "0b0da261-d220-427b-bb4c-fc6419407be1", 00:08:11.146 "base_bdev": "aio_bdev", 00:08:11.146 "thin_provision": false, 00:08:11.146 "num_allocated_clusters": 38, 00:08:11.146 "snapshot": false, 00:08:11.146 "clone": false, 00:08:11.146 "esnap_clone": false 00:08:11.146 } 00:08:11.146 } 00:08:11.146 } 00:08:11.146 ] 00:08:11.146 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:11.146 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:11.146 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:11.715 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:11.715 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:11.715 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:11.973 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:11.973 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 806edb90-3ea1-44a6-afa7-1e658658602c 00:08:12.231 09:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0b0da261-d220-427b-bb4c-fc6419407be1 00:08:12.490 09:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.748 09:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:13.008 ************************************ 00:08:13.008 END TEST lvs_grow_dirty 00:08:13.008 ************************************ 00:08:13.008 00:08:13.008 real 0m20.438s 00:08:13.008 user 0m43.241s 00:08:13.008 sys 0m8.230s 00:08:13.008 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.008 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:13.268 nvmf_trace.0 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.268 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:13.527 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.527 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:13.527 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.527 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.527 rmmod nvme_tcp 00:08:13.527 rmmod nvme_fabrics 00:08:13.786 rmmod nvme_keyring 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63687 ']' 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63687 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 63687 ']' 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 63687 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:13.786 09:57:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63687 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.786 killing process with pid 63687 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63687' 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 63687 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 63687 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:13.786 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.045 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.045 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:14.045 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:14.045 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:14.045 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:14.045 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:14.045 00:08:14.045 real 0m42.172s 00:08:14.045 user 1m8.332s 00:08:14.045 sys 0m11.924s 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.045 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:14.045 ************************************ 00:08:14.045 END TEST nvmf_lvs_grow 00:08:14.045 ************************************ 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.344 ************************************ 00:08:14.344 START TEST nvmf_bdev_io_wait 00:08:14.344 ************************************ 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:14.344 * Looking for test storage... 
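Note: the nvmf_lvs_grow run above ends with the standard nvmftestfini teardown. Stripped of the xtrace wrappers, the sequence the trace records amounts to roughly the following shell; the pid, paths and interface names are the ones logged for this run, and the final namespace removal is inferred from the _remove_spdk_ns call rather than shown verbatim.

  tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0  # archive the trace shm file
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics      # unload host-side NVMe-oF modules (also drops nvme_keyring)
  kill 63687 && wait 63687                                    # stop the nvmf_tgt reactor (pid of this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore        # strip only the SPDK_NVMF-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster; ip link set "$dev" down      # detach veth ends from the bridge
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if; ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side veths live inside the namespace
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  # _remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself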
00:08:14.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.344 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:14.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.344 --rc genhtml_branch_coverage=1 00:08:14.345 --rc genhtml_function_coverage=1 00:08:14.345 --rc genhtml_legend=1 00:08:14.345 --rc geninfo_all_blocks=1 00:08:14.345 --rc geninfo_unexecuted_blocks=1 00:08:14.345 00:08:14.345 ' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:14.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.345 --rc genhtml_branch_coverage=1 00:08:14.345 --rc genhtml_function_coverage=1 00:08:14.345 --rc genhtml_legend=1 00:08:14.345 --rc geninfo_all_blocks=1 00:08:14.345 --rc geninfo_unexecuted_blocks=1 00:08:14.345 00:08:14.345 ' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:14.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.345 --rc genhtml_branch_coverage=1 00:08:14.345 --rc genhtml_function_coverage=1 00:08:14.345 --rc genhtml_legend=1 00:08:14.345 --rc geninfo_all_blocks=1 00:08:14.345 --rc geninfo_unexecuted_blocks=1 00:08:14.345 00:08:14.345 ' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:14.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.345 --rc genhtml_branch_coverage=1 00:08:14.345 --rc genhtml_function_coverage=1 00:08:14.345 --rc genhtml_legend=1 00:08:14.345 --rc geninfo_all_blocks=1 00:08:14.345 --rc geninfo_unexecuted_blocks=1 00:08:14.345 00:08:14.345 ' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:14.345 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
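Note: with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 in place, the nvmftestinit call that follows builds the virtual test topology (NET_TYPE=virt, so nvmf_veth_init rather than physical NICs). Condensed, the traced setup below amounts to roughly this sketch; addresses and names are the ones logged, and the second nvmf_init_if2/nvmf_tgt_if2 pair is configured the same way and omitted for brevity.

  ip netns add nvmf_tgt_ns_spdk                                  # target gets its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the two halves
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the listener
  ping -c 1 10.0.0.3                                             # connectivity check before the test proper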
00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.345 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.346 
09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:14.346 Cannot find device "nvmf_init_br" 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:14.346 Cannot find device "nvmf_init_br2" 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:14.346 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:14.605 Cannot find device "nvmf_tgt_br" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.605 Cannot find device "nvmf_tgt_br2" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:14.605 Cannot find device "nvmf_init_br" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:14.605 Cannot find device "nvmf_init_br2" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:14.605 Cannot find device "nvmf_tgt_br" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:14.605 Cannot find device "nvmf_tgt_br2" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:14.605 Cannot find device "nvmf_br" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:14.605 Cannot find device "nvmf_init_if" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:14.605 Cannot find device "nvmf_init_if2" 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:14.605 
09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:14.605 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:14.864 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:14.864 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:08:14.864 00:08:14.864 --- 10.0.0.3 ping statistics --- 00:08:14.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.864 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:14.864 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:14.864 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:08:14.864 00:08:14.864 --- 10.0.0.4 ping statistics --- 00:08:14.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.864 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:14.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:14.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:14.864 00:08:14.864 --- 10.0.0.1 ping statistics --- 00:08:14.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.864 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:14.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:14.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:08:14.864 00:08:14.864 --- 10.0.0.2 ping statistics --- 00:08:14.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.864 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64054 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64054 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 64054 ']' 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.864 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.864 [2024-11-04 09:57:46.986165] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:08:14.864 [2024-11-04 09:57:46.986276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.123 [2024-11-04 09:57:47.140198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.123 [2024-11-04 09:57:47.211627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.123 [2024-11-04 09:57:47.211746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.123 [2024-11-04 09:57:47.211761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.123 [2024-11-04 09:57:47.211772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.123 [2024-11-04 09:57:47.211780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.123 [2024-11-04 09:57:47.213016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.123 [2024-11-04 09:57:47.213114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.123 [2024-11-04 09:57:47.213265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.123 [2024-11-04 09:57:47.213272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.059 09:57:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.059 09:57:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:16.059 09:57:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.059 09:57:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.059 09:57:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.059 [2024-11-04 09:57:48.079274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.059 [2024-11-04 09:57:48.091518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:16.059 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.060 Malloc0 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.060 [2024-11-04 09:57:48.146351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64095 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.060 { 00:08:16.060 
"params": { 00:08:16.060 "name": "Nvme$subsystem", 00:08:16.060 "trtype": "$TEST_TRANSPORT", 00:08:16.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.060 "adrfam": "ipv4", 00:08:16.060 "trsvcid": "$NVMF_PORT", 00:08:16.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.060 "hdgst": ${hdgst:-false}, 00:08:16.060 "ddgst": ${ddgst:-false} 00:08:16.060 }, 00:08:16.060 "method": "bdev_nvme_attach_controller" 00:08:16.060 } 00:08:16.060 EOF 00:08:16.060 )") 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64097 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64099 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.060 { 00:08:16.060 "params": { 00:08:16.060 "name": "Nvme$subsystem", 00:08:16.060 "trtype": "$TEST_TRANSPORT", 00:08:16.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.060 "adrfam": "ipv4", 00:08:16.060 "trsvcid": "$NVMF_PORT", 00:08:16.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.060 "hdgst": ${hdgst:-false}, 00:08:16.060 "ddgst": ${ddgst:-false} 00:08:16.060 }, 00:08:16.060 "method": "bdev_nvme_attach_controller" 00:08:16.060 } 00:08:16.060 EOF 00:08:16.060 )") 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64102 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 
00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.060 { 00:08:16.060 "params": { 00:08:16.060 "name": "Nvme$subsystem", 00:08:16.060 "trtype": "$TEST_TRANSPORT", 00:08:16.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.060 "adrfam": "ipv4", 00:08:16.060 "trsvcid": "$NVMF_PORT", 00:08:16.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.060 "hdgst": ${hdgst:-false}, 00:08:16.060 "ddgst": ${ddgst:-false} 00:08:16.060 }, 00:08:16.060 "method": "bdev_nvme_attach_controller" 00:08:16.060 } 00:08:16.060 EOF 00:08:16.060 )") 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.060 { 00:08:16.060 "params": { 00:08:16.060 "name": "Nvme$subsystem", 00:08:16.060 "trtype": "$TEST_TRANSPORT", 00:08:16.060 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.060 "adrfam": "ipv4", 00:08:16.060 "trsvcid": "$NVMF_PORT", 00:08:16.060 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.060 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.060 "hdgst": ${hdgst:-false}, 00:08:16.060 "ddgst": ${ddgst:-false} 00:08:16.060 }, 00:08:16.060 "method": "bdev_nvme_attach_controller" 00:08:16.060 } 00:08:16.060 EOF 00:08:16.060 )") 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.060 "params": { 00:08:16.060 "name": "Nvme1", 00:08:16.060 "trtype": "tcp", 00:08:16.060 "traddr": "10.0.0.3", 00:08:16.060 "adrfam": "ipv4", 00:08:16.060 "trsvcid": "4420", 00:08:16.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.060 "hdgst": false, 00:08:16.060 "ddgst": false 00:08:16.060 }, 00:08:16.060 "method": "bdev_nvme_attach_controller" 00:08:16.060 }' 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.060 "params": { 00:08:16.060 "name": "Nvme1", 00:08:16.060 "trtype": "tcp", 00:08:16.060 "traddr": "10.0.0.3", 00:08:16.060 "adrfam": "ipv4", 00:08:16.060 "trsvcid": "4420", 00:08:16.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.060 "hdgst": false, 00:08:16.060 "ddgst": false 00:08:16.060 }, 00:08:16.060 "method": "bdev_nvme_attach_controller" 00:08:16.060 }' 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.060 "params": { 00:08:16.060 "name": "Nvme1", 00:08:16.060 "trtype": "tcp", 00:08:16.060 "traddr": "10.0.0.3", 00:08:16.060 "adrfam": "ipv4", 00:08:16.060 "trsvcid": "4420", 00:08:16.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.060 "hdgst": false, 00:08:16.060 "ddgst": false 00:08:16.060 }, 00:08:16.060 "method": "bdev_nvme_attach_controller" 00:08:16.060 }' 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.060 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.060 "params": { 00:08:16.060 "name": "Nvme1", 00:08:16.060 "trtype": "tcp", 00:08:16.060 "traddr": "10.0.0.3", 00:08:16.060 "adrfam": "ipv4", 00:08:16.060 "trsvcid": "4420", 00:08:16.060 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.060 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.060 "hdgst": false, 00:08:16.060 "ddgst": false 00:08:16.060 }, 00:08:16.060 "method": "bdev_nvme_attach_controller" 00:08:16.060 }' 00:08:16.060 [2024-11-04 09:57:48.217394] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:08:16.060 [2024-11-04 09:57:48.217610] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:16.060 [2024-11-04 09:57:48.218804] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:08:16.060 [2024-11-04 09:57:48.218888] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:16.319 [2024-11-04 09:57:48.234884] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:08:16.319 [2024-11-04 09:57:48.234967] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:16.319 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64095 00:08:16.319 [2024-11-04 09:57:48.259771] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:08:16.319 [2024-11-04 09:57:48.259858] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:16.319 [2024-11-04 09:57:48.436205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.577 [2024-11-04 09:57:48.492637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:16.577 [2024-11-04 09:57:48.502023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.577 [2024-11-04 09:57:48.506609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.577 [2024-11-04 09:57:48.550450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:16.577 [2024-11-04 09:57:48.563126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.577 [2024-11-04 09:57:48.571654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.577 [2024-11-04 09:57:48.620903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:16.577 [2024-11-04 09:57:48.633738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.578 Running I/O for 1 seconds... 00:08:16.578 [2024-11-04 09:57:48.640861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.578 Running I/O for 1 seconds... 00:08:16.578 [2024-11-04 09:57:48.698789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:16.578 [2024-11-04 09:57:48.712284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.836 Running I/O for 1 seconds... 00:08:16.836 Running I/O for 1 seconds... 
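The per-job tables that follow report throughput for each bdevperf instance; the MiB/s column follows directly from IOPS and the 4096-byte I/O size (MiB/s = IOPS × 4096 / 2^20). For example, the 0x10 write job's 6939.63 IOPS works out to about 27.11 MiB/s, matching the table; a quick check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 6939.63 * 4096 / 1048576 }'   # prints 27.11 MiB/s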
00:08:17.771 6917.00 IOPS, 27.02 MiB/s 00:08:17.771 Latency(us) 00:08:17.771 [2024-11-04T09:57:49.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.771 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:17.771 Nvme1n1 : 1.02 6939.63 27.11 0.00 0.00 18326.58 5153.51 34078.72 00:08:17.771 [2024-11-04T09:57:49.941Z] =================================================================================================================== 00:08:17.771 [2024-11-04T09:57:49.941Z] Total : 6939.63 27.11 0.00 0.00 18326.58 5153.51 34078.72 00:08:17.771 174480.00 IOPS, 681.56 MiB/s 00:08:17.771 Latency(us) 00:08:17.771 [2024-11-04T09:57:49.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.771 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:17.771 Nvme1n1 : 1.00 174127.56 680.19 0.00 0.00 731.34 392.84 1980.97 00:08:17.771 [2024-11-04T09:57:49.941Z] =================================================================================================================== 00:08:17.771 [2024-11-04T09:57:49.941Z] Total : 174127.56 680.19 0.00 0.00 731.34 392.84 1980.97 00:08:17.771 7777.00 IOPS, 30.38 MiB/s 00:08:17.771 Latency(us) 00:08:17.771 [2024-11-04T09:57:49.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.771 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:17.771 Nvme1n1 : 1.01 7816.84 30.53 0.00 0.00 16276.80 9234.62 25737.77 00:08:17.771 [2024-11-04T09:57:49.941Z] =================================================================================================================== 00:08:17.771 [2024-11-04T09:57:49.941Z] Total : 7816.84 30.53 0.00 0.00 16276.80 9234.62 25737.77 00:08:17.771 09:57:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64097 00:08:17.771 6909.00 IOPS, 26.99 MiB/s 00:08:17.771 Latency(us) 00:08:17.771 [2024-11-04T09:57:49.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.771 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:17.771 Nvme1n1 : 1.01 7042.04 27.51 0.00 0.00 18120.34 5332.25 45279.42 00:08:17.771 [2024-11-04T09:57:49.941Z] =================================================================================================================== 00:08:17.771 [2024-11-04T09:57:49.941Z] Total : 7042.04 27.51 0.00 0.00 18120.34 5332.25 45279.42 00:08:17.771 09:57:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64099 00:08:17.771 09:57:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64102 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.029 rmmod nvme_tcp 00:08:18.029 rmmod nvme_fabrics 00:08:18.029 rmmod nvme_keyring 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64054 ']' 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64054 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 64054 ']' 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 64054 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64054 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64054' 00:08:18.029 killing process with pid 64054 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 64054 00:08:18.029 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 64054 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:18.287 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:18.545 00:08:18.545 real 0m4.284s 00:08:18.545 user 0m17.165s 00:08:18.545 sys 0m2.192s 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:18.545 ************************************ 00:08:18.545 END TEST nvmf_bdev_io_wait 00:08:18.545 ************************************ 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.545 ************************************ 00:08:18.545 START TEST nvmf_queue_depth 00:08:18.545 ************************************ 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:18.545 * Looking for test storage... 
00:08:18.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:18.545 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:18.805 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:18.805 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.805 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:18.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.806 --rc genhtml_branch_coverage=1 00:08:18.806 --rc genhtml_function_coverage=1 00:08:18.806 --rc genhtml_legend=1 00:08:18.806 --rc geninfo_all_blocks=1 00:08:18.806 --rc geninfo_unexecuted_blocks=1 00:08:18.806 00:08:18.806 ' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:18.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.806 --rc genhtml_branch_coverage=1 00:08:18.806 --rc genhtml_function_coverage=1 00:08:18.806 --rc genhtml_legend=1 00:08:18.806 --rc geninfo_all_blocks=1 00:08:18.806 --rc geninfo_unexecuted_blocks=1 00:08:18.806 00:08:18.806 ' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:18.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.806 --rc genhtml_branch_coverage=1 00:08:18.806 --rc genhtml_function_coverage=1 00:08:18.806 --rc genhtml_legend=1 00:08:18.806 --rc geninfo_all_blocks=1 00:08:18.806 --rc geninfo_unexecuted_blocks=1 00:08:18.806 00:08:18.806 ' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:18.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.806 --rc genhtml_branch_coverage=1 00:08:18.806 --rc genhtml_function_coverage=1 00:08:18.806 --rc genhtml_legend=1 00:08:18.806 --rc geninfo_all_blocks=1 00:08:18.806 --rc geninfo_unexecuted_blocks=1 00:08:18.806 00:08:18.806 ' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:18.806 
09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.806 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:18.807 09:57:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:18.807 Cannot find device "nvmf_init_br" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:18.807 Cannot find device "nvmf_init_br2" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:18.807 Cannot find device "nvmf_tgt_br" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:18.807 Cannot find device "nvmf_tgt_br2" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:18.807 Cannot find device "nvmf_init_br" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:18.807 Cannot find device "nvmf_init_br2" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:18.807 Cannot find device "nvmf_tgt_br" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:18.807 Cannot find device "nvmf_tgt_br2" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:18.807 Cannot find device "nvmf_br" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:18.807 Cannot find device "nvmf_init_if" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:18.807 Cannot find device "nvmf_init_if2" 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:18.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.807 09:57:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:18.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:18.807 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:19.066 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:19.066 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:19.067 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:19.067 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:19.067 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:19.067 
09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:19.067 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:19.067 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:08:19.067 00:08:19.067 --- 10.0.0.3 ping statistics --- 00:08:19.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.067 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:19.067 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:19.067 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:08:19.067 00:08:19.067 --- 10.0.0.4 ping statistics --- 00:08:19.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.067 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:19.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:19.067 00:08:19.067 --- 10.0.0.1 ping statistics --- 00:08:19.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.067 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:19.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:19.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:08:19.067 00:08:19.067 --- 10.0.0.2 ping statistics --- 00:08:19.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.067 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64379 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64379 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64379 ']' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.067 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.067 [2024-11-04 09:57:51.172610] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:08:19.067 [2024-11-04 09:57:51.172746] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.326 [2024-11-04 09:57:51.318156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.326 [2024-11-04 09:57:51.378735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.326 [2024-11-04 09:57:51.378819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.326 [2024-11-04 09:57:51.378844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.326 [2024-11-04 09:57:51.378870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.326 [2024-11-04 09:57:51.378878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.326 [2024-11-04 09:57:51.379313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.326 [2024-11-04 09:57:51.436837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.585 [2024-11-04 09:57:51.557474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.585 Malloc0 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.585 [2024-11-04 09:57:51.613692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64409 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64409 /var/tmp/bdevperf.sock 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64409 ']' 00:08:19.585 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:19.586 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:19.586 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:19.586 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.586 09:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:19.586 [2024-11-04 09:57:51.675738] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:08:19.586 [2024-11-04 09:57:51.675833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64409 ] 00:08:19.845 [2024-11-04 09:57:51.826196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.845 [2024-11-04 09:57:51.885003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.845 [2024-11-04 09:57:51.941119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.104 09:57:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:20.104 09:57:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:20.104 09:57:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:20.104 09:57:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.104 09:57:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:20.104 NVMe0n1 00:08:20.104 09:57:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.104 09:57:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:20.104 Running I/O for 10 seconds... 00:08:22.415 6154.00 IOPS, 24.04 MiB/s [2024-11-04T09:57:55.522Z] 6874.00 IOPS, 26.85 MiB/s [2024-11-04T09:57:56.458Z] 7311.67 IOPS, 28.56 MiB/s [2024-11-04T09:57:57.395Z] 7449.75 IOPS, 29.10 MiB/s [2024-11-04T09:57:58.380Z] 7607.60 IOPS, 29.72 MiB/s [2024-11-04T09:57:59.325Z] 7779.33 IOPS, 30.39 MiB/s [2024-11-04T09:58:00.262Z] 7913.14 IOPS, 30.91 MiB/s [2024-11-04T09:58:01.639Z] 7989.75 IOPS, 31.21 MiB/s [2024-11-04T09:58:02.574Z] 8017.33 IOPS, 31.32 MiB/s [2024-11-04T09:58:02.574Z] 8064.00 IOPS, 31.50 MiB/s 00:08:30.404 Latency(us) 00:08:30.404 [2024-11-04T09:58:02.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.404 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:30.404 Verification LBA range: start 0x0 length 0x4000 00:08:30.404 NVMe0n1 : 10.09 8089.83 31.60 0.00 0.00 125898.42 26571.87 99138.09 00:08:30.404 [2024-11-04T09:58:02.574Z] =================================================================================================================== 00:08:30.404 [2024-11-04T09:58:02.574Z] Total : 8089.83 31.60 0.00 0.00 125898.42 26571.87 99138.09 00:08:30.404 { 00:08:30.404 "results": [ 00:08:30.404 { 00:08:30.404 "job": "NVMe0n1", 00:08:30.404 "core_mask": "0x1", 00:08:30.404 "workload": "verify", 00:08:30.404 "status": "finished", 00:08:30.404 "verify_range": { 00:08:30.404 "start": 0, 00:08:30.404 "length": 16384 00:08:30.404 }, 00:08:30.404 "queue_depth": 1024, 00:08:30.404 "io_size": 4096, 00:08:30.404 "runtime": 10.094646, 00:08:30.404 "iops": 8089.832966901465, 00:08:30.404 "mibps": 31.600910026958847, 00:08:30.404 "io_failed": 0, 00:08:30.404 "io_timeout": 0, 00:08:30.404 "avg_latency_us": 125898.42384724993, 00:08:30.404 "min_latency_us": 26571.86909090909, 00:08:30.404 "max_latency_us": 99138.09454545454 00:08:30.404 
} 00:08:30.404 ], 00:08:30.404 "core_count": 1 00:08:30.404 } 00:08:30.404 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64409 00:08:30.404 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64409 ']' 00:08:30.404 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64409 00:08:30.405 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:30.405 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:30.405 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64409 00:08:30.405 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:30.405 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:30.405 killing process with pid 64409 00:08:30.405 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64409' 00:08:30.405 Received shutdown signal, test time was about 10.000000 seconds 00:08:30.405 00:08:30.405 Latency(us) 00:08:30.405 [2024-11-04T09:58:02.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.405 [2024-11-04T09:58:02.575Z] =================================================================================================================== 00:08:30.405 [2024-11-04T09:58:02.575Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:30.405 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64409 00:08:30.405 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64409 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:30.664 rmmod nvme_tcp 00:08:30.664 rmmod nvme_fabrics 00:08:30.664 rmmod nvme_keyring 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64379 ']' 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64379 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64379 ']' 00:08:30.664 
09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64379 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64379 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:30.664 killing process with pid 64379 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64379' 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64379 00:08:30.664 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64379 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:30.923 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:30.923 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:30.923 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:30.923 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:30.923 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:30.923 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:30.923 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:31.182 09:58:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:31.182 00:08:31.182 real 0m12.663s 00:08:31.182 user 0m21.756s 00:08:31.182 sys 0m2.137s 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:31.182 ************************************ 00:08:31.182 END TEST nvmf_queue_depth 00:08:31.182 ************************************ 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.182 ************************************ 00:08:31.182 START TEST nvmf_target_multipath 00:08:31.182 ************************************ 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:31.182 * Looking for test storage... 
00:08:31.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:31.182 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:31.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.443 --rc genhtml_branch_coverage=1 00:08:31.443 --rc genhtml_function_coverage=1 00:08:31.443 --rc genhtml_legend=1 00:08:31.443 --rc geninfo_all_blocks=1 00:08:31.443 --rc geninfo_unexecuted_blocks=1 00:08:31.443 00:08:31.443 ' 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:31.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.443 --rc genhtml_branch_coverage=1 00:08:31.443 --rc genhtml_function_coverage=1 00:08:31.443 --rc genhtml_legend=1 00:08:31.443 --rc geninfo_all_blocks=1 00:08:31.443 --rc geninfo_unexecuted_blocks=1 00:08:31.443 00:08:31.443 ' 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:31.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.443 --rc genhtml_branch_coverage=1 00:08:31.443 --rc genhtml_function_coverage=1 00:08:31.443 --rc genhtml_legend=1 00:08:31.443 --rc geninfo_all_blocks=1 00:08:31.443 --rc geninfo_unexecuted_blocks=1 00:08:31.443 00:08:31.443 ' 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:31.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.443 --rc genhtml_branch_coverage=1 00:08:31.443 --rc genhtml_function_coverage=1 00:08:31.443 --rc genhtml_legend=1 00:08:31.443 --rc geninfo_all_blocks=1 00:08:31.443 --rc geninfo_unexecuted_blocks=1 00:08:31.443 00:08:31.443 ' 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.443 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.444 
09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.444 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:31.444 09:58:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:31.444 Cannot find device "nvmf_init_br" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:31.444 Cannot find device "nvmf_init_br2" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:31.444 Cannot find device "nvmf_tgt_br" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.444 Cannot find device "nvmf_tgt_br2" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:31.444 Cannot find device "nvmf_init_br" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:31.444 Cannot find device "nvmf_init_br2" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:31.444 Cannot find device "nvmf_tgt_br" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:31.444 Cannot find device "nvmf_tgt_br2" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:31.444 Cannot find device "nvmf_br" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:31.444 Cannot find device "nvmf_init_if" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:31.444 Cannot find device "nvmf_init_if2" 00:08:31.444 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:31.445 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.445 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:31.445 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:31.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:31.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:08:31.725 00:08:31.725 --- 10.0.0.3 ping statistics --- 00:08:31.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.725 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:31.725 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:31.725 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:08:31.725 00:08:31.725 --- 10.0.0.4 ping statistics --- 00:08:31.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.725 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:31.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:31.725 00:08:31.725 --- 10.0.0.1 ping statistics --- 00:08:31.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.725 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:31.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:08:31.725 00:08:31.725 --- 10.0.0.2 ping statistics --- 00:08:31.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.725 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.725 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64772 00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64772 00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 64772 ']' 00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:31.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:31.985 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:31.985 [2024-11-04 09:58:03.960248] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:08:31.985 [2024-11-04 09:58:03.960349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.985 [2024-11-04 09:58:04.112786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.244 [2024-11-04 09:58:04.179668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.244 [2024-11-04 09:58:04.179946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.244 [2024-11-04 09:58:04.180094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.244 [2024-11-04 09:58:04.180282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.244 [2024-11-04 09:58:04.180419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.244 [2024-11-04 09:58:04.181680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.244 [2024-11-04 09:58:04.181745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.244 [2024-11-04 09:58:04.181800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.244 [2024-11-04 09:58:04.181807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.244 [2024-11-04 09:58:04.255636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.179 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.179 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:08:33.179 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.179 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.179 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:33.179 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.179 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:33.179 [2024-11-04 09:58:05.272440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.179 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:33.747 Malloc0 00:08:33.747 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:33.747 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:34.314 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:34.314 [2024-11-04 09:58:06.431101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:34.314 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:34.572 [2024-11-04 09:58:06.715338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:34.572 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid=89901d6b-8f02-4106-8c0e-f8e118ca6735 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:34.831 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid=89901d6b-8f02-4106-8c0e-f8e118ca6735 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:34.831 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.831 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:08:34.831 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:34.831 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:34.831 09:58:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64867 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:37.365 09:58:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:37.365 [global] 00:08:37.365 thread=1 00:08:37.365 invalidate=1 00:08:37.365 rw=randrw 00:08:37.365 time_based=1 00:08:37.365 runtime=6 00:08:37.365 ioengine=libaio 00:08:37.365 direct=1 00:08:37.365 bs=4096 00:08:37.365 iodepth=128 00:08:37.365 norandommap=0 00:08:37.365 numjobs=1 00:08:37.365 00:08:37.365 verify_dump=1 00:08:37.365 verify_backlog=512 00:08:37.365 verify_state_save=0 00:08:37.365 do_verify=1 00:08:37.365 verify=crc32c-intel 00:08:37.365 [job0] 00:08:37.365 filename=/dev/nvme0n1 00:08:37.365 Could not set queue depth (nvme0n1) 00:08:37.365 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:37.365 fio-3.35 00:08:37.365 Starting 1 thread 00:08:37.933 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:38.191 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:38.758 09:58:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:39.016 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64867 00:08:44.283 00:08:44.283 job0: (groupid=0, jobs=1): err= 0: pid=64888: Mon Nov 4 09:58:15 2024 00:08:44.283 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(244MiB/6005msec) 00:08:44.283 slat (usec): min=3, max=6095, avg=56.61, stdev=227.42 00:08:44.283 clat (usec): min=1243, max=14877, avg=8333.49, stdev=1473.95 00:08:44.283 lat (usec): min=1272, max=14888, avg=8390.10, stdev=1477.80 00:08:44.283 clat percentiles (usec): 00:08:44.283 | 1.00th=[ 4424], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 7570], 00:08:44.283 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:08:44.283 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11994], 00:08:44.283 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14222], 99.95th=[14484], 00:08:44.283 | 99.99th=[14877] 00:08:44.283 bw ( KiB/s): min=10640, max=26640, per=52.88%, avg=21989.73, stdev=4871.81, samples=11 00:08:44.283 iops : min= 2660, max= 6660, avg=5497.36, stdev=1217.92, samples=11 00:08:44.283 write: IOPS=6164, BW=24.1MiB/s (25.2MB/s)(131MiB/5429msec); 0 zone resets 00:08:44.283 slat (usec): min=14, max=1835, avg=64.17, stdev=158.99 00:08:44.283 clat (usec): min=1447, max=14388, avg=7220.94, stdev=1275.12 00:08:44.283 lat (usec): min=1472, max=14410, avg=7285.11, stdev=1279.93 00:08:44.283 clat percentiles (usec): 00:08:44.283 | 1.00th=[ 3294], 5.00th=[ 4293], 10.00th=[ 5669], 20.00th=[ 6718], 00:08:44.283 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:08:44.283 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8455], 00:08:44.283 | 99.00th=[11338], 99.50th=[11863], 99.90th=[13042], 99.95th=[13435], 00:08:44.283 | 99.99th=[14091] 00:08:44.283 bw ( KiB/s): min=11048, max=26112, per=89.14%, avg=21981.18, stdev=4591.89, samples=11 00:08:44.283 iops : min= 2762, max= 6528, avg=5495.27, stdev=1147.96, samples=11 00:08:44.283 lat (msec) : 2=0.03%, 4=1.50%, 10=92.83%, 20=5.64% 00:08:44.283 cpu : usr=5.43%, sys=20.85%, ctx=5484, majf=0, minf=78 00:08:44.283 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:44.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:44.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:44.283 issued rwts: total=62428,33467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:44.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:44.283 00:08:44.283 Run status group 0 (all jobs): 00:08:44.283 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=244MiB (256MB), run=6005-6005msec 00:08:44.283 WRITE: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=131MiB (137MB), run=5429-5429msec 00:08:44.283 00:08:44.283 Disk stats (read/write): 00:08:44.283 nvme0n1: ios=61756/32619, merge=0/0, ticks=494445/221499, in_queue=715944, util=98.58% 00:08:44.283 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:44.283 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:44.283 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:44.283 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:44.283 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:44.283 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64969 00:08:44.284 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:44.284 [global] 00:08:44.284 thread=1 00:08:44.284 invalidate=1 00:08:44.284 rw=randrw 00:08:44.284 time_based=1 00:08:44.284 runtime=6 00:08:44.284 ioengine=libaio 00:08:44.284 direct=1 00:08:44.284 bs=4096 00:08:44.284 iodepth=128 00:08:44.284 norandommap=0 00:08:44.284 numjobs=1 00:08:44.284 00:08:44.284 verify_dump=1 00:08:44.284 verify_backlog=512 00:08:44.284 verify_state_save=0 00:08:44.284 do_verify=1 00:08:44.284 verify=crc32c-intel 00:08:44.284 [job0] 00:08:44.284 filename=/dev/nvme0n1 00:08:44.284 Could not set queue depth (nvme0n1) 00:08:44.284 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:44.284 fio-3.35 00:08:44.284 Starting 1 thread 00:08:45.227 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:45.227 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:45.794 
09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:45.794 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:46.361 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64969 00:08:50.546 00:08:50.546 job0: (groupid=0, jobs=1): err= 0: pid=64994: Mon Nov 4 09:58:22 2024 00:08:50.546 read: IOPS=11.1k, BW=43.5MiB/s (45.7MB/s)(261MiB/6002msec) 00:08:50.546 slat (usec): min=3, max=10128, avg=43.40, stdev=200.70 00:08:50.546 clat (usec): min=317, max=18396, avg=7753.69, stdev=2332.90 00:08:50.546 lat (usec): min=334, max=18406, avg=7797.09, stdev=2346.21 00:08:50.546 clat percentiles (usec): 00:08:50.546 | 1.00th=[ 1532], 5.00th=[ 3195], 10.00th=[ 4178], 20.00th=[ 6194], 00:08:50.546 | 30.00th=[ 7373], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8455], 00:08:50.546 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11863], 00:08:50.546 | 99.00th=[13566], 99.50th=[14877], 99.90th=[16909], 99.95th=[17433], 00:08:50.546 | 99.99th=[17957] 00:08:50.546 bw ( KiB/s): min=12592, max=38008, per=53.96%, avg=24062.55, stdev=7647.62, samples=11 00:08:50.546 iops : min= 3148, max= 9502, avg=6015.64, stdev=1911.90, samples=11 00:08:50.546 write: IOPS=6650, BW=26.0MiB/s (27.2MB/s)(142MiB/5471msec); 0 zone resets 00:08:50.546 slat (usec): min=13, max=1926, avg=56.78, stdev=143.67 00:08:50.546 clat (usec): min=218, max=17296, avg=6687.78, stdev=2024.52 00:08:50.546 lat (usec): min=284, max=17318, avg=6744.55, stdev=2037.34 00:08:50.546 clat percentiles (usec): 00:08:50.546 | 1.00th=[ 1254], 5.00th=[ 2966], 10.00th=[ 3654], 20.00th=[ 4752], 00:08:50.546 | 30.00th=[ 6128], 40.00th=[ 7046], 50.00th=[ 7308], 60.00th=[ 7570], 00:08:50.546 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 8717], 00:08:50.546 | 99.00th=[11731], 99.50th=[13304], 99.90th=[14877], 99.95th=[15533], 00:08:50.546 | 99.99th=[17171] 00:08:50.546 bw ( KiB/s): min=13392, max=37248, per=90.49%, avg=24070.55, stdev=7428.83, samples=11 00:08:50.546 iops : min= 3348, max= 9312, avg=6017.64, stdev=1857.21, samples=11 00:08:50.546 lat (usec) : 250=0.01%, 500=0.04%, 750=0.11%, 1000=0.27% 00:08:50.546 lat (msec) : 2=1.25%, 4=8.77%, 10=83.60%, 20=5.94% 00:08:50.546 cpu : usr=5.40%, sys=23.06%, ctx=6182, majf=0, minf=102 00:08:50.546 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:50.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.546 issued rwts: total=66910,36383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.546 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:08:50.546 00:08:50.546 Run status group 0 (all jobs): 00:08:50.546 READ: bw=43.5MiB/s (45.7MB/s), 43.5MiB/s-43.5MiB/s (45.7MB/s-45.7MB/s), io=261MiB (274MB), run=6002-6002msec 00:08:50.546 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=142MiB (149MB), run=5471-5471msec 00:08:50.546 00:08:50.546 Disk stats (read/write): 00:08:50.546 nvme0n1: ios=65935/35871, merge=0/0, ticks=488724/224892, in_queue=713616, util=98.62% 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:08:50.546 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.805 rmmod nvme_tcp 00:08:50.805 rmmod nvme_fabrics 00:08:50.805 rmmod nvme_keyring 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 64772 ']' 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64772 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 64772 ']' 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 64772 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64772 00:08:50.805 killing process with pid 64772 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64772' 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 64772 00:08:50.805 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 64772 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:51.064 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:51.064 
09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:51.323 00:08:51.323 real 0m20.093s 00:08:51.323 user 1m15.037s 00:08:51.323 sys 0m9.642s 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.323 ************************************ 00:08:51.323 END TEST nvmf_target_multipath 00:08:51.323 ************************************ 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.323 ************************************ 00:08:51.323 START TEST nvmf_zcopy 00:08:51.323 ************************************ 00:08:51.323 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.323 * Looking for test storage... 
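Note on the multipath flow traced above: each listener's ANA state is flipped through rpc.py, and the test then waits for the kernel's view in /sys/block/<path>/ana_state to match. Only isolated lines of check_ana_state appear in the xtrace, so the polling loop below is an assumption; the variable names (path, ana_state, timeout, ana_state_f) are the ones visible in the trace. A minimal sketch:

    # check_ana_state, as reconstructed from the trace; the retry loop and the
    # 1-second sleep are assumptions, only the variable setup lines are shown above.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Wait until the kernel exposes the path and reports the expected ANA state.
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- > 0 )) || return 1   # give up after ~20 seconds
            sleep 1
        done
    }

    # As issued at multipath.sh@109/@110 after both listeners were set to optimized:
    # check_ana_state nvme0c0n1 optimized
    # check_ana_state nvme0c1n1 optimized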
00:08:51.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.583 --rc genhtml_branch_coverage=1 00:08:51.583 --rc genhtml_function_coverage=1 00:08:51.583 --rc genhtml_legend=1 00:08:51.583 --rc geninfo_all_blocks=1 00:08:51.583 --rc geninfo_unexecuted_blocks=1 00:08:51.583 00:08:51.583 ' 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.583 --rc genhtml_branch_coverage=1 00:08:51.583 --rc genhtml_function_coverage=1 00:08:51.583 --rc genhtml_legend=1 00:08:51.583 --rc geninfo_all_blocks=1 00:08:51.583 --rc geninfo_unexecuted_blocks=1 00:08:51.583 00:08:51.583 ' 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.583 --rc genhtml_branch_coverage=1 00:08:51.583 --rc genhtml_function_coverage=1 00:08:51.583 --rc genhtml_legend=1 00:08:51.583 --rc geninfo_all_blocks=1 00:08:51.583 --rc geninfo_unexecuted_blocks=1 00:08:51.583 00:08:51.583 ' 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.583 --rc genhtml_branch_coverage=1 00:08:51.583 --rc genhtml_function_coverage=1 00:08:51.583 --rc genhtml_legend=1 00:08:51.583 --rc geninfo_all_blocks=1 00:08:51.583 --rc geninfo_unexecuted_blocks=1 00:08:51.583 00:08:51.583 ' 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
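The 'lt 1.15 2' check traced above walks scripts/common.sh's cmp_versions helper: both version strings are split on dots and dashes, the components are compared numerically from the left, and 1.15 sorts before 2, so the lcov coverage options get enabled. A simplified sketch of that comparison (less-than only; the real helper also handles '>', '=', and the ':' separator):

    # Simplified dotted-version "less than", mirroring cmp_versions' approach:
    # split on '.'/'-', then compare numerically component by component.
    version_lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # version_lt 1.15 2   -> true, matching 'lt 1.15 2' in the trace above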
00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.583 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.584 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
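The '[: : integer expression expected' warning from common.sh line 33 above is the usual symptom of handing test's numeric -eq operator an empty string: the variable behind that check expands to nothing, the comparison fails with this warning, and build_nvmf_app_args simply falls through to the next branch. A minimal reproduction and the customary guard (the variable name 'flag' is illustrative only, not taken from the script):

    # Reproduces the warning: an empty operand is not an integer.
    flag=''
    [ "$flag" -eq 1 ] && echo "enabled"    # -> "[: : integer expression expected"

    # Customary guard: default the expansion so the numeric test always sees an integer.
    [ "${flag:-0}" -eq 1 ] && echo "enabled"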
00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:51.584 Cannot find device "nvmf_init_br" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:51.584 09:58:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:51.584 Cannot find device "nvmf_init_br2" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:51.584 Cannot find device "nvmf_tgt_br" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.584 Cannot find device "nvmf_tgt_br2" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:51.584 Cannot find device "nvmf_init_br" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:51.584 Cannot find device "nvmf_init_br2" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:51.584 Cannot find device "nvmf_tgt_br" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:51.584 Cannot find device "nvmf_tgt_br2" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:51.584 Cannot find device "nvmf_br" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:51.584 Cannot find device "nvmf_init_if" 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:51.584 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:51.843 Cannot find device "nvmf_init_if2" 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:51.843 09:58:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:51.843 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:51.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:08:51.844 00:08:51.844 --- 10.0.0.3 ping statistics --- 00:08:51.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.844 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:51.844 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:51.844 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:08:51.844 00:08:51.844 --- 10.0.0.4 ping statistics --- 00:08:51.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.844 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:51.844 00:08:51.844 --- 10.0.0.1 ping statistics --- 00:08:51.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.844 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:51.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:08:51.844 00:08:51.844 --- 10.0.0.2 ping statistics --- 00:08:51.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.844 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.844 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.103 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65301 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65301 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 65301 ']' 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:52.104 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.104 [2024-11-04 09:58:24.088253] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
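nvmf_veth_init, traced above, rebuilds the test network from scratch: initiator veth legs stay on the host, target legs are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side legs are enslaved to nvmf_br, addresses 10.0.0.1 through 10.0.0.4 are assigned, and iptables rules admit NVMe/TCP traffic on port 4420. Condensed to a single initiator/target pair, the sequence is roughly:

    # One initiator/target pair instead of the two the test creates.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # host-side initiator leg
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side leg
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target leg into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    # Bridge the two bridge-side legs so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Admit NVMe/TCP traffic, then verify connectivity as the pings above do.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3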
00:08:52.104 [2024-11-04 09:58:24.088345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.104 [2024-11-04 09:58:24.243806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.362 [2024-11-04 09:58:24.312345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.362 [2024-11-04 09:58:24.312429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.362 [2024-11-04 09:58:24.312455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.362 [2024-11-04 09:58:24.312465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.363 [2024-11-04 09:58:24.312474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.363 [2024-11-04 09:58:24.312960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.363 [2024-11-04 09:58:24.369342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.363 [2024-11-04 09:58:24.481095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.363 [2024-11-04 09:58:24.497239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.363 malloc0 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.363 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.632 { 00:08:52.632 "params": { 00:08:52.632 "name": "Nvme$subsystem", 00:08:52.632 "trtype": "$TEST_TRANSPORT", 00:08:52.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.632 "adrfam": "ipv4", 00:08:52.632 "trsvcid": "$NVMF_PORT", 00:08:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.632 "hdgst": ${hdgst:-false}, 00:08:52.632 "ddgst": ${ddgst:-false} 00:08:52.632 }, 00:08:52.632 "method": "bdev_nvme_attach_controller" 00:08:52.632 } 00:08:52.632 EOF 00:08:52.632 )") 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
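The zcopy target bring-up above boils down to a short rpc.py sequence; the commands below are copied from the trace, with the RPC socket assumed to be the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with zero-copy enabled and in-capsule data size forced to 0
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem capped at 10 namespaces, any host allowed, fixed serial number
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -m 10
    # Data and discovery listeners on the in-namespace address
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # 32 MiB malloc bdev with 4 KiB blocks, exported as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

(The trace shows the serial passed as SPDK00000000000001 at zcopy.sh@24; the value above is illustrative and either string works as a serial number.)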
00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:52.632 09:58:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.632 "params": { 00:08:52.632 "name": "Nvme1", 00:08:52.632 "trtype": "tcp", 00:08:52.632 "traddr": "10.0.0.3", 00:08:52.632 "adrfam": "ipv4", 00:08:52.632 "trsvcid": "4420", 00:08:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.632 "hdgst": false, 00:08:52.632 "ddgst": false 00:08:52.632 }, 00:08:52.632 "method": "bdev_nvme_attach_controller" 00:08:52.632 }' 00:08:52.632 [2024-11-04 09:58:24.600281] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:08:52.632 [2024-11-04 09:58:24.600416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65321 ] 00:08:52.632 [2024-11-04 09:58:24.755473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.897 [2024-11-04 09:58:24.823726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.897 [2024-11-04 09:58:24.890995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.898 Running I/O for 10 seconds... 00:08:55.209 5645.00 IOPS, 44.10 MiB/s [2024-11-04T09:58:28.372Z] 5752.50 IOPS, 44.94 MiB/s [2024-11-04T09:58:29.324Z] 5837.33 IOPS, 45.60 MiB/s [2024-11-04T09:58:30.259Z] 5910.25 IOPS, 46.17 MiB/s [2024-11-04T09:58:31.195Z] 5906.00 IOPS, 46.14 MiB/s [2024-11-04T09:58:32.131Z] 5893.67 IOPS, 46.04 MiB/s [2024-11-04T09:58:33.090Z] 5894.43 IOPS, 46.05 MiB/s [2024-11-04T09:58:34.027Z] 5891.38 IOPS, 46.03 MiB/s [2024-11-04T09:58:35.404Z] 5883.22 IOPS, 45.96 MiB/s [2024-11-04T09:58:35.404Z] 5916.30 IOPS, 46.22 MiB/s 00:09:03.234 Latency(us) 00:09:03.234 [2024-11-04T09:58:35.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.234 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:03.234 Verification LBA range: start 0x0 length 0x1000 00:09:03.234 Nvme1n1 : 10.02 5919.59 46.25 0.00 0.00 21556.92 3321.48 30742.34 00:09:03.234 [2024-11-04T09:58:35.404Z] =================================================================================================================== 00:09:03.234 [2024-11-04T09:58:35.404Z] Total : 5919.59 46.25 0.00 0.00 21556.92 3321.48 30742.34 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65444 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:03.234 { 00:09:03.234 "params": { 00:09:03.234 "name": "Nvme$subsystem", 00:09:03.234 "trtype": "$TEST_TRANSPORT", 00:09:03.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.234 "adrfam": "ipv4", 00:09:03.234 "trsvcid": "$NVMF_PORT", 00:09:03.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.234 "hdgst": ${hdgst:-false}, 00:09:03.234 "ddgst": ${ddgst:-false} 00:09:03.234 }, 00:09:03.234 "method": "bdev_nvme_attach_controller" 00:09:03.234 } 00:09:03.234 EOF 00:09:03.234 )") 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:03.234 [2024-11-04 09:58:35.233634] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.233693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:03.234 09:58:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:03.234 "params": { 00:09:03.234 "name": "Nvme1", 00:09:03.234 "trtype": "tcp", 00:09:03.234 "traddr": "10.0.0.3", 00:09:03.234 "adrfam": "ipv4", 00:09:03.234 "trsvcid": "4420", 00:09:03.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.234 "hdgst": false, 00:09:03.234 "ddgst": false 00:09:03.234 }, 00:09:03.234 "method": "bdev_nvme_attach_controller" 00:09:03.234 }' 00:09:03.234 [2024-11-04 09:58:35.245577] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.245646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.257609] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.257683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.269579] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.269663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.281580] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.281663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.293599] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.293684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.305603] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.305687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.309198] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
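gen_nvmf_target_json, expanded above, emits one bdev_nvme_attach_controller entry per subsystem, and bdevperf reads the result through a /dev/fd path. The inner entry below is exactly what the trace prints; the outer subsystems/bdev envelope and the temp-file path are assumptions added to keep the sketch self-contained:

    # /tmp/bdevperf_nvmf.json is a stand-in; the test streams the config over /dev/fd/62 instead.
    cat > /tmp/bdevperf_nvmf.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON
    # 10-second verify workload at queue depth 128 with 8 KiB I/Os, as in the first run above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192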
00:09:03.234 [2024-11-04 09:58:35.310065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65444 ] 00:09:03.234 [2024-11-04 09:58:35.317615] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.317732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.329615] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.329682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.341641] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.341692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.353659] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.353715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.365670] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.365722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.377689] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.377743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.389667] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.389728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.234 [2024-11-04 09:58:35.401688] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.234 [2024-11-04 09:58:35.401734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.493 [2024-11-04 09:58:35.413681] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.413747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.425698] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.425748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.437710] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.437732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.449712] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.449755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.457534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.494 [2024-11-04 09:58:35.461727] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.461768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.473746] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.473796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.485742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.485788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.497748] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.497794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.509750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.509778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.521742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.494 [2024-11-04 09:58:35.521751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.521775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.533752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.533794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.545768] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.545816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.557786] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.557836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.569783] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.569817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.581779] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.581815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.586584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.494 [2024-11-04 09:58:35.593786] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.593817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.605798] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.605834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.617789] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.617817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.629798] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:03.494 [2024-11-04 09:58:35.629830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.641796] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.641842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.494 [2024-11-04 09:58:35.653804] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.494 [2024-11-04 09:58:35.653849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.665811] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.665854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.677822] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.677868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.689840] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.689889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.701864] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.701912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 Running I/O for 5 seconds... 00:09:03.800 [2024-11-04 09:58:35.720538] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.720585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.734956] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.735003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.750860] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.750896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.769704] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.769752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.784843] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.784892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.794896] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.794934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.811179] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.811227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.827217] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.827274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:03.800 [2024-11-04 09:58:35.844301] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.844350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.861098] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.861148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.877771] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.877806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.894756] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.894789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.911057] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.911105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.927206] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.927254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.944119] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.944181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.959453] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.959501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.800 [2024-11-04 09:58:35.968673] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.800 [2024-11-04 09:58:35.968705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:35.985712] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:35.985744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.001602] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.001685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.020313] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.020375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.035035] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.035088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.051218] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.051266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.068612] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 
[2024-11-04 09:58:36.068670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.085309] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.085342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.101895] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.101941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.117797] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.117829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.136295] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.136344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.151308] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.151351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.167009] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.167067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.184314] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.184363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.200132] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.200181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.209201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.209249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.060 [2024-11-04 09:58:36.225070] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.060 [2024-11-04 09:58:36.225117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.240292] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.240340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.256203] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.256250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.272798] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.272847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.288707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.288739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.306678] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.306726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.323897] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.323929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.339717] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.339748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.358732] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.358765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.373719] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.373752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.391241] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.391288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.407759] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.407793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.424200] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.424247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.441076] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.441123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.456426] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.456473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.320 [2024-11-04 09:58:36.474467] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.320 [2024-11-04 09:58:36.474515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.490990] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.491037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.508170] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.508220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.524991] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.525071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.539618] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.539652] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.557427] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.557476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.571785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.571820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.587948] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.588015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.604817] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.604870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.619363] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.619416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.637334] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.637390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.652785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.652821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.670687] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.670726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.685505] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.685560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.703208] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.703262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 11436.00 IOPS, 89.34 MiB/s [2024-11-04T09:58:36.749Z] [2024-11-04 09:58:36.718374] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.718414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.579 [2024-11-04 09:58:36.736476] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.579 [2024-11-04 09:58:36.736531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.751402] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.751454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.766687] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.766737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 
09:58:36.776700] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.776748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.792224] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.792273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.807964] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.808013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.825201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.825251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.841778] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.841827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.859131] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.859179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.875278] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.875328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.892259] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.892308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.907737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.907774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.917329] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.917376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.933231] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.933268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.949994] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.950058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.967095] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.967146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:36.985065] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:36.985115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:04.838 [2024-11-04 09:58:37.000782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:04.838 [2024-11-04 09:58:37.000822] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.018384] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.018438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.033342] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.033393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.050752] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.050804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.066132] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.066186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.084174] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.084226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.098917] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.098990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.108922] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.108960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.125007] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.125059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.135160] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.135201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.150221] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.150255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.165826] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.165860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.175346] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.175395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.191205] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.191252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.207368] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.207417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.224175] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.224226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.241989] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.242041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.097 [2024-11-04 09:58:37.256776] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.097 [2024-11-04 09:58:37.256824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.355 [2024-11-04 09:58:37.272944] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.355 [2024-11-04 09:58:37.272996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.355 [2024-11-04 09:58:37.289584] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.355 [2024-11-04 09:58:37.289662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.355 [2024-11-04 09:58:37.305784] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.355 [2024-11-04 09:58:37.305819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.355 [2024-11-04 09:58:37.315156] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.355 [2024-11-04 09:58:37.315206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.355 [2024-11-04 09:58:37.326761] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.355 [2024-11-04 09:58:37.326808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.343884] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.343920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.360279] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.360327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.377769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.377808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.393833] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.393877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.411476] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.411527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.427056] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.427108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.436667] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.436700] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.453250] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.453301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.470401] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.470451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.487996] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.488033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.499875] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.499912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.356 [2024-11-04 09:58:37.516182] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.356 [2024-11-04 09:58:37.516234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.532977] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.533020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.549796] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.549833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.559881] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.559915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.574818] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.574851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.591051] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.591099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.609343] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.609394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.624042] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.624093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.639405] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.639456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.656407] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.656461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.673143] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.673183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.689883] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.689929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.705572] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.705642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 11507.00 IOPS, 89.90 MiB/s [2024-11-04T09:58:37.785Z] [2024-11-04 09:58:37.724150] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.724190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.738946] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.739000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.751684] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.751739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.615 [2024-11-04 09:58:37.770815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.615 [2024-11-04 09:58:37.770871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.787349] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.787399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.803368] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.803418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.819280] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.819333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.836898] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.836953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.854685] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.854731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.870092] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.870159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.880080] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.880134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.896868] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:05.874 [2024-11-04 09:58:37.896920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.911937] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.911989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.922113] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.922149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.938188] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.938225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.954378] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.954426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.973690] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.973746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:37.988361] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:37.988419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:38.004148] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:38.004199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:38.021707] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:38.021766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:05.874 [2024-11-04 09:58:38.037619] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:05.874 [2024-11-04 09:58:38.037678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.055502] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.055549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.070238] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.070287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.086031] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.086087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.095785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.095818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.111792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.111831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.128625] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.128676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.146344] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.146392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.161428] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.161461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.170921] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.170957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.187225] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.187282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.203736] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.203782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.221458] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.221510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.237858] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.237894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.255176] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.255232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.269900] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.269954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.133 [2024-11-04 09:58:38.285900] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.133 [2024-11-04 09:58:38.285986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.303466] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.303517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.318640] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.318686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.328840] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.328873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.343661] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.343711] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.358819] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.358869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.368085] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.368132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.384577] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.384697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.401324] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.401373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.419201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.419249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.434055] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.434111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.449995] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.450047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.467074] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.467140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.482692] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.482742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.492533] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.492588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.509215] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.509266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.524530] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.524581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.542328] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.542363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.393 [2024-11-04 09:58:38.557549] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.393 [2024-11-04 09:58:38.557598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.651 [2024-11-04 09:58:38.566986] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.651 [2024-11-04 09:58:38.567021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.651 [2024-11-04 09:58:38.583096] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.651 [2024-11-04 09:58:38.583161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.651 [2024-11-04 09:58:38.599342] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.599393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.618477] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.618549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.633438] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.633505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.643056] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.643118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.658807] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.658841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.676486] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.676534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.691751] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.691786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.701346] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.701394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 11379.00 IOPS, 88.90 MiB/s [2024-11-04T09:58:38.822Z] [2024-11-04 09:58:38.717344] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.717393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.734530] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.734578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.749909] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.749944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.759008] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.759072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.775812] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:06.652 [2024-11-04 09:58:38.775844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.792191] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.792239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.807887] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.807921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.652 [2024-11-04 09:58:38.817353] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.652 [2024-11-04 09:58:38.817401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.910 [2024-11-04 09:58:38.832481] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.910 [2024-11-04 09:58:38.832529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.910 [2024-11-04 09:58:38.848699] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.910 [2024-11-04 09:58:38.848736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.910 [2024-11-04 09:58:38.865850] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.910 [2024-11-04 09:58:38.865905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.910 [2024-11-04 09:58:38.878477] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.910 [2024-11-04 09:58:38.878537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.910 [2024-11-04 09:58:38.897528] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.910 [2024-11-04 09:58:38.897624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.910 [2024-11-04 09:58:38.914522] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.910 [2024-11-04 09:58:38.914583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.910 [2024-11-04 09:58:38.930822] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:38.930872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:38.943476] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:38.943539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:38.962071] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:38.962111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:38.977531] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:38.977648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:38.994271] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:38.994322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:39.010784] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:39.010821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:39.026903] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:39.026938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:39.043344] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:39.043379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:39.059801] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:39.059833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.911 [2024-11-04 09:58:39.077237] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.911 [2024-11-04 09:58:39.077287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.091893] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.091929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.107896] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.107930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.127007] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.127054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.141552] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.141612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.158716] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.158773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.174485] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.174520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.193243] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.193271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.208425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.208474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.217773] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.217805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.234223] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.234271] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.249910] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.249977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.260134] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.260179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.275940] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.275987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.292270] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.292319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.309482] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.309530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.170 [2024-11-04 09:58:39.325559] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.170 [2024-11-04 09:58:39.325619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.343124] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.343185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.360551] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.360628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.378236] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.378285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.393113] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.393162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.408948] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.408991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.426339] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.426377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.442455] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.442503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.459894] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.459932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.475738] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.475773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.494198] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.494248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.509000] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.509049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.524880] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.524916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.543201] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.543247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.559785] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.559819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.576394] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.576442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.428 [2024-11-04 09:58:39.592903] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.428 [2024-11-04 09:58:39.592939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.610029] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.610078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.626554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.626633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.645327] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.645375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.660402] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.660451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.678800] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.678858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.693988] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.694066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.703276] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.703321] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 11313.25 IOPS, 88.38 MiB/s [2024-11-04T09:58:39.856Z] [2024-11-04 09:58:39.719994] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.720042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.736900] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.736940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.753494] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.753541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.770176] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.770211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.787493] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.787540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.803001] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.803034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.818864] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.818914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.835445] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.835505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.686 [2024-11-04 09:58:39.854016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.686 [2024-11-04 09:58:39.854063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:39.868611] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.868668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:39.880404] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.880450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:39.895951] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.895985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:39.913460] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.913502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:39.928808] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.928849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 
09:58:39.947048] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.947085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:39.961691] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.961724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:39.973301] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.973350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:39.988033] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:39.988089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.003805] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.003840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.013935] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.013996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.028993] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.029029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.047263] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.047302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.062354] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.062415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.080266] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.080319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.095066] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.095114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.112555] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.112627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.127595] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.127639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.997 [2024-11-04 09:58:40.137192] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.997 [2024-11-04 09:58:40.137239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.152578] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.152674] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.169340] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.169379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.185805] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.185840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.202057] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.202094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.219395] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.219432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.229660] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.229694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.244798] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.244832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.265330] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.265369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.276655] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.276685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.289380] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.289414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.308193] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.308243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.322848] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.322882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.338401] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.338452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.255 [2024-11-04 09:58:40.357783] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.255 [2024-11-04 09:58:40.357817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.256 [2024-11-04 09:58:40.373019] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.256 [2024-11-04 09:58:40.373052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.256 [2024-11-04 09:58:40.391467] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.256 [2024-11-04 09:58:40.391500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.256 [2024-11-04 09:58:40.406038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.256 [2024-11-04 09:58:40.406072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.256 [2024-11-04 09:58:40.421686] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.256 [2024-11-04 09:58:40.421751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.513 [2024-11-04 09:58:40.438699] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.513 [2024-11-04 09:58:40.438731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.513 [2024-11-04 09:58:40.455077] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.513 [2024-11-04 09:58:40.455126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.513 [2024-11-04 09:58:40.471639] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.513 [2024-11-04 09:58:40.471672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.513 [2024-11-04 09:58:40.487500] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.513 [2024-11-04 09:58:40.487570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.513 [2024-11-04 09:58:40.504291] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.513 [2024-11-04 09:58:40.504349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.513 [2024-11-04 09:58:40.521349] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.513 [2024-11-04 09:58:40.521407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.513 [2024-11-04 09:58:40.538183] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.513 [2024-11-04 09:58:40.538228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.514 [2024-11-04 09:58:40.556331] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.514 [2024-11-04 09:58:40.556366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.514 [2024-11-04 09:58:40.571381] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.514 [2024-11-04 09:58:40.571444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.514 [2024-11-04 09:58:40.590803] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.514 [2024-11-04 09:58:40.590838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.514 [2024-11-04 09:58:40.606110] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.514 [2024-11-04 09:58:40.606159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.514 [2024-11-04 09:58:40.623294] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.514 [2024-11-04 09:58:40.623328] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.514 [2024-11-04 09:58:40.640576] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.514 [2024-11-04 09:58:40.640662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.514 [2024-11-04 09:58:40.653322] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.514 [2024-11-04 09:58:40.653364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.514 [2024-11-04 09:58:40.672833] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.514 [2024-11-04 09:58:40.672878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.689168] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.689209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.705719] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.705752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 11299.80 IOPS, 88.28 MiB/s 00:09:08.772 Latency(us) 00:09:08.772 [2024-11-04T09:58:40.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.772 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:08.772 Nvme1n1 : 5.01 11304.33 88.32 0.00 0.00 11311.01 4706.68 22878.02 00:09:08.772 [2024-11-04T09:58:40.942Z] =================================================================================================================== 00:09:08.772 [2024-11-04T09:58:40.942Z] Total : 11304.33 88.32 0.00 0.00 11311.01 4706.68 22878.02 00:09:08.772 [2024-11-04 09:58:40.717967] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.718000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.729973] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.730031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.742006] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.742056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.754003] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.754050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.766029] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.766087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.778016] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.778084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.790014] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.790084] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.802034] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.802098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.814030] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.814090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.826043] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.826099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.838038] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.838094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.850048] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.850113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.862042] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.862072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.874057] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.874116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.886056] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.886111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.898053] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.772 [2024-11-04 09:58:40.898081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.772 [2024-11-04 09:58:40.910034] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.773 [2024-11-04 09:58:40.910077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.773 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65444) - No such process 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65444 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.773 delay0 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.773 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.031 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.031 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:09.031 [2024-11-04 09:58:41.120561] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:15.593 Initializing NVMe Controllers 00:09:15.593 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:15.593 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:15.593 Initialization complete. Launching workers. 00:09:15.593 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 144 00:09:15.593 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 431, failed to submit 33 00:09:15.593 success 327, unsuccessful 104, failed 0 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.593 rmmod nvme_tcp 00:09:15.593 rmmod nvme_fabrics 00:09:15.593 rmmod nvme_keyring 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65301 ']' 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65301 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 65301 ']' 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 65301 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.593 09:58:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65301 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:15.593 killing process with pid 65301 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65301' 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 65301 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 65301 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:15.593 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.852 09:58:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:15.852 00:09:15.852 real 0m24.383s 00:09:15.852 user 0m39.816s 00:09:15.852 sys 0m6.851s 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.852 ************************************ 00:09:15.852 END TEST nvmf_zcopy 00:09:15.852 ************************************ 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.852 ************************************ 00:09:15.852 START TEST nvmf_nmic 00:09:15.852 ************************************ 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:15.852 * Looking for test storage... 00:09:15.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:15.852 09:58:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.112 09:58:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:16.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.112 --rc genhtml_branch_coverage=1 00:09:16.112 --rc genhtml_function_coverage=1 00:09:16.112 --rc genhtml_legend=1 00:09:16.112 --rc geninfo_all_blocks=1 00:09:16.112 --rc geninfo_unexecuted_blocks=1 00:09:16.112 00:09:16.112 ' 00:09:16.112 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:16.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.113 --rc genhtml_branch_coverage=1 00:09:16.113 --rc genhtml_function_coverage=1 00:09:16.113 --rc genhtml_legend=1 00:09:16.113 --rc geninfo_all_blocks=1 00:09:16.113 --rc geninfo_unexecuted_blocks=1 00:09:16.113 00:09:16.113 ' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:16.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.113 --rc genhtml_branch_coverage=1 00:09:16.113 --rc genhtml_function_coverage=1 00:09:16.113 --rc genhtml_legend=1 00:09:16.113 --rc geninfo_all_blocks=1 00:09:16.113 --rc geninfo_unexecuted_blocks=1 00:09:16.113 00:09:16.113 ' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:16.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.113 --rc genhtml_branch_coverage=1 00:09:16.113 --rc genhtml_function_coverage=1 00:09:16.113 --rc genhtml_legend=1 00:09:16.113 --rc geninfo_all_blocks=1 00:09:16.113 --rc geninfo_unexecuted_blocks=1 00:09:16.113 00:09:16.113 ' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:16.113 09:58:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.113 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:16.113 09:58:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:16.113 Cannot 
find device "nvmf_init_br" 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:16.113 Cannot find device "nvmf_init_br2" 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:16.113 Cannot find device "nvmf_tgt_br" 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.113 Cannot find device "nvmf_tgt_br2" 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:16.113 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:16.113 Cannot find device "nvmf_init_br" 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:16.114 Cannot find device "nvmf_init_br2" 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:16.114 Cannot find device "nvmf_tgt_br" 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:16.114 Cannot find device "nvmf_tgt_br2" 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:16.114 Cannot find device "nvmf_br" 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:16.114 Cannot find device "nvmf_init_if" 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:16.114 Cannot find device "nvmf_init_if2" 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:16.114 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:16.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:16.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:09:16.373 00:09:16.373 --- 10.0.0.3 ping statistics --- 00:09:16.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.373 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:16.373 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:16.373 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:09:16.373 00:09:16.373 --- 10.0.0.4 ping statistics --- 00:09:16.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.373 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:16.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:16.373 00:09:16.373 --- 10.0.0.1 ping statistics --- 00:09:16.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.373 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:16.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:09:16.373 00:09:16.373 --- 10.0.0.2 ping statistics --- 00:09:16.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.373 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.373 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65818 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65818 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 65818 ']' 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:16.374 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.633 [2024-11-04 09:58:48.593671] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
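Once nvmf_tgt is running inside the namespace and listening on /var/tmp/spdk.sock, nmic.sh drives it over JSON-RPC. rpc_cmd is roughly equivalent to calling rpc.py directly, so the configuration traced below amounts to this sketch (NQNs, serials, sizes, and addresses as in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# test case 1: the same bdev cannot back a namespace in a second subsystem
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: Malloc0 already claimed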
00:09:16.633 [2024-11-04 09:58:48.593749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.633 [2024-11-04 09:58:48.749904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.892 [2024-11-04 09:58:48.821283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.892 [2024-11-04 09:58:48.821361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.892 [2024-11-04 09:58:48.821380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.892 [2024-11-04 09:58:48.821391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.892 [2024-11-04 09:58:48.821400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.892 [2024-11-04 09:58:48.822661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.892 [2024-11-04 09:58:48.822751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.892 [2024-11-04 09:58:48.822887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.892 [2024-11-04 09:58:48.822893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.892 [2024-11-04 09:58:48.882956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.892 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.892 [2024-11-04 09:58:48.996300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.892 Malloc0 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.892 09:58:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.892 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:16.892 [2024-11-04 09:58:49.061617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.150 test case1: single bdev can't be used in multiple subsystems 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.150 [2024-11-04 09:58:49.085400] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:17.150 [2024-11-04 09:58:49.085436] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:17.150 [2024-11-04 09:58:49.085448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:17.150 request: 00:09:17.150 { 00:09:17.150 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:17.150 "namespace": { 00:09:17.150 "bdev_name": "Malloc0", 00:09:17.150 "no_auto_visible": false 00:09:17.150 }, 00:09:17.150 "method": "nvmf_subsystem_add_ns", 00:09:17.150 "req_id": 1 00:09:17.150 } 00:09:17.150 Got JSON-RPC error response 00:09:17.150 response: 00:09:17.150 { 00:09:17.150 "code": -32602, 00:09:17.150 "message": "Invalid parameters" 00:09:17.150 } 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:17.150 Adding namespace failed - expected result. 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:17.150 test case2: host connect to nvmf target in multiple paths 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.150 [2024-11-04 09:58:49.097534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid=89901d6b-8f02-4106-8c0e-f8e118ca6735 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:17.150 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid=89901d6b-8f02-4106-8c0e-f8e118ca6735 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:17.408 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:17.408 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:17.408 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:17.408 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:17.408 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:19.310 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:19.310 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:19.310 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.310 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:19.310 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.310 09:58:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:19.310 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:19.310 [global] 00:09:19.310 thread=1 00:09:19.310 invalidate=1 00:09:19.310 rw=write 00:09:19.310 time_based=1 00:09:19.310 runtime=1 00:09:19.310 ioengine=libaio 00:09:19.310 direct=1 00:09:19.310 bs=4096 00:09:19.310 iodepth=1 00:09:19.310 norandommap=0 00:09:19.310 numjobs=1 00:09:19.310 00:09:19.310 verify_dump=1 00:09:19.310 verify_backlog=512 00:09:19.310 verify_state_save=0 00:09:19.310 do_verify=1 00:09:19.310 verify=crc32c-intel 00:09:19.310 [job0] 00:09:19.310 filename=/dev/nvme0n1 00:09:19.310 Could not set queue depth (nvme0n1) 00:09:19.569 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.569 fio-3.35 00:09:19.569 Starting 1 thread 00:09:20.507 00:09:20.507 job0: (groupid=0, jobs=1): err= 0: pid=65902: Mon Nov 4 09:58:52 2024 00:09:20.507 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:20.507 slat (nsec): min=10856, max=70349, avg=13828.55, stdev=4719.84 00:09:20.507 clat (usec): min=137, max=394, avg=178.71, stdev=20.25 00:09:20.507 lat (usec): min=151, max=406, avg=192.54, stdev=21.07 00:09:20.507 clat percentiles (usec): 00:09:20.507 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:09:20.507 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:09:20.507 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 212], 00:09:20.507 | 99.00th=[ 231], 99.50th=[ 243], 99.90th=[ 375], 99.95th=[ 383], 00:09:20.507 | 99.99th=[ 396] 00:09:20.507 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:20.507 slat (usec): min=14, max=104, avg=21.13, stdev= 6.66 00:09:20.507 clat (usec): min=84, max=373, avg=108.56, stdev=16.75 00:09:20.507 lat (usec): min=102, max=395, avg=129.70, stdev=19.15 00:09:20.507 clat percentiles (usec): 00:09:20.507 | 1.00th=[ 89], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 00:09:20.507 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 110], 00:09:20.507 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 126], 95.00th=[ 135], 00:09:20.507 | 99.00th=[ 157], 99.50th=[ 172], 99.90th=[ 302], 99.95th=[ 338], 00:09:20.507 | 99.99th=[ 375] 00:09:20.507 bw ( KiB/s): min=12288, max=12288, per=99.97%, avg=12288.00, stdev= 0.00, samples=1 00:09:20.507 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:20.507 lat (usec) : 100=13.99%, 250=85.65%, 500=0.36% 00:09:20.507 cpu : usr=2.20%, sys=8.40%, ctx=6148, majf=0, minf=5 00:09:20.507 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.507 issued rwts: total=3072,3076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.507 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.507 00:09:20.507 Run status group 0 (all jobs): 00:09:20.507 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:20.507 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:20.507 00:09:20.507 Disk stats (read/write): 00:09:20.507 nvme0n1: ios=2613/3072, merge=0/0, ticks=489/356, in_queue=845, 
util=91.28% 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.766 rmmod nvme_tcp 00:09:20.766 rmmod nvme_fabrics 00:09:20.766 rmmod nvme_keyring 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65818 ']' 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65818 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 65818 ']' 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 65818 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65818 00:09:20.766 killing process with pid 65818 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65818' 00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 65818 
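The teardown that follows mirrors the setup. In outline, based on the nvme disconnect, nvmfcleanup, killprocess, and nvmf_veth_fini steps traced here (the final namespace removal happens in _remove_spdk_ns, whose trace is suppressed, so that last line is an assumption):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # drop both paths to cnode1
modprobe -v -r nvme-tcp                               # unloads nvme_tcp, nvme_fabrics, nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"                    # stop nvmf_tgt, a child of the test shell (pid 65818 here)
iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove only the rules tagged by ipts
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns delete nvmf_tgt_ns_spdk                      # assumed: performed inside _remove_spdk_ns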
00:09:20.766 09:58:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 65818 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:21.027 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.289 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:21.289 00:09:21.289 real 0m5.538s 00:09:21.289 user 0m16.012s 00:09:21.289 sys 0m2.370s 00:09:21.289 ************************************ 00:09:21.289 END TEST nvmf_nmic 00:09:21.289 ************************************ 00:09:21.290 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.290 09:58:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:21.290 09:58:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:21.290 09:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:21.290 09:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.290 09:58:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.290 ************************************ 00:09:21.290 START TEST nvmf_fio_target 00:09:21.290 ************************************ 00:09:21.290 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:21.549 * Looking for test storage... 00:09:21.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:21.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.549 --rc genhtml_branch_coverage=1 00:09:21.549 --rc genhtml_function_coverage=1 00:09:21.549 --rc genhtml_legend=1 00:09:21.549 --rc geninfo_all_blocks=1 00:09:21.549 --rc geninfo_unexecuted_blocks=1 00:09:21.549 00:09:21.549 ' 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:21.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.549 --rc genhtml_branch_coverage=1 00:09:21.549 --rc genhtml_function_coverage=1 00:09:21.549 --rc genhtml_legend=1 00:09:21.549 --rc geninfo_all_blocks=1 00:09:21.549 --rc geninfo_unexecuted_blocks=1 00:09:21.549 00:09:21.549 ' 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:21.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.549 --rc genhtml_branch_coverage=1 00:09:21.549 --rc genhtml_function_coverage=1 00:09:21.549 --rc genhtml_legend=1 00:09:21.549 --rc geninfo_all_blocks=1 00:09:21.549 --rc geninfo_unexecuted_blocks=1 00:09:21.549 00:09:21.549 ' 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:21.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.549 --rc genhtml_branch_coverage=1 00:09:21.549 --rc genhtml_function_coverage=1 00:09:21.549 --rc genhtml_legend=1 00:09:21.549 --rc geninfo_all_blocks=1 00:09:21.549 --rc geninfo_unexecuted_blocks=1 00:09:21.549 00:09:21.549 ' 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:21.549 
09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:21.549 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.550 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.550 09:58:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:21.550 Cannot find device "nvmf_init_br" 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:21.550 Cannot find device "nvmf_init_br2" 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:21.550 Cannot find device "nvmf_tgt_br" 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:21.550 Cannot find device "nvmf_tgt_br2" 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:21.550 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:21.809 Cannot find device "nvmf_init_br" 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:21.809 Cannot find device "nvmf_init_br2" 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:21.809 Cannot find device "nvmf_tgt_br" 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:21.809 Cannot find device "nvmf_tgt_br2" 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:21.809 Cannot find device "nvmf_br" 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:21.809 Cannot find device "nvmf_init_if" 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:21.809 Cannot find device "nvmf_init_if2" 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:21.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:21.809 
09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:21.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:21.809 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:22.069 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:22.069 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:22.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:22.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:22.069 00:09:22.069 --- 10.0.0.3 ping statistics --- 00:09:22.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.069 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:22.069 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:22.069 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:22.069 00:09:22.069 --- 10.0.0.4 ping statistics --- 00:09:22.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.069 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:22.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:22.069 00:09:22.069 --- 10.0.0.1 ping statistics --- 00:09:22.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.069 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:22.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:22.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:09:22.069 00:09:22.069 --- 10.0.0.2 ping statistics --- 00:09:22.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.069 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66134 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66134 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 66134 ']' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:22.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:22.069 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.069 [2024-11-04 09:58:54.160758] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
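[Editor's note] Once the pings succeed, nvmfappstart prepends "ip netns exec nvmf_tgt_ns_spdk" to the application command line and launches nvmf_tgt with core mask 0xF, then blocks until the RPC socket at /var/tmp/spdk.sock answers. A reduced sketch of that start-and-wait step, assuming the repo path used in this run; the polling loop is an illustrative stand-in for the waitforlisten helper in autotest_common.sh:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path as used in this run

  # Run the target on 4 cores inside the namespace; -i 0 selects shm id 0,
  # -e 0xFFFF enables all tracepoint groups (hence the spdk_trace hints in the log).
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Simplified stand-in for waitforlisten: poll the RPC socket until the app responds.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt is up as pid $nvmfpid"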
00:09:22.069 [2024-11-04 09:58:54.160854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.360 [2024-11-04 09:58:54.311511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.360 [2024-11-04 09:58:54.374595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.360 [2024-11-04 09:58:54.374844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.360 [2024-11-04 09:58:54.374981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.360 [2024-11-04 09:58:54.375050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.360 [2024-11-04 09:58:54.375193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.360 [2024-11-04 09:58:54.376433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.360 [2024-11-04 09:58:54.376505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.360 [2024-11-04 09:58:54.376670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.360 [2024-11-04 09:58:54.376670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.360 [2024-11-04 09:58:54.432299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.360 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:22.360 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:22.360 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.360 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.360 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.618 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.618 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:22.879 [2024-11-04 09:58:54.819470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.879 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.137 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:23.137 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.396 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:23.396 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:23.963 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:23.963 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.223 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:24.223 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:24.482 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.741 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:24.741 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.999 09:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:24.999 09:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.258 09:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:25.258 09:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:25.517 09:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:25.775 09:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:25.775 09:58:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.034 09:58:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:26.034 09:58:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:26.292 09:58:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:26.550 [2024-11-04 09:58:58.632304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:26.550 09:58:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:26.808 09:58:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:27.067 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid=89901d6b-8f02-4106-8c0e-f8e118ca6735 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:27.325 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:27.325 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:27.325 09:58:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.325 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:27.325 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:27.326 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:29.230 09:59:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:29.230 09:59:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:29.230 09:59:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.230 09:59:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:29.230 09:59:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.230 09:59:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:29.230 09:59:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:29.230 [global] 00:09:29.230 thread=1 00:09:29.230 invalidate=1 00:09:29.230 rw=write 00:09:29.230 time_based=1 00:09:29.230 runtime=1 00:09:29.230 ioengine=libaio 00:09:29.230 direct=1 00:09:29.230 bs=4096 00:09:29.230 iodepth=1 00:09:29.230 norandommap=0 00:09:29.230 numjobs=1 00:09:29.230 00:09:29.230 verify_dump=1 00:09:29.230 verify_backlog=512 00:09:29.230 verify_state_save=0 00:09:29.230 do_verify=1 00:09:29.230 verify=crc32c-intel 00:09:29.230 [job0] 00:09:29.230 filename=/dev/nvme0n1 00:09:29.230 [job1] 00:09:29.230 filename=/dev/nvme0n2 00:09:29.230 [job2] 00:09:29.230 filename=/dev/nvme0n3 00:09:29.230 [job3] 00:09:29.230 filename=/dev/nvme0n4 00:09:29.230 Could not set queue depth (nvme0n1) 00:09:29.230 Could not set queue depth (nvme0n2) 00:09:29.230 Could not set queue depth (nvme0n3) 00:09:29.230 Could not set queue depth (nvme0n4) 00:09:29.489 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.489 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.489 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.489 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.489 fio-3.35 00:09:29.489 Starting 4 threads 00:09:30.865 00:09:30.865 job0: (groupid=0, jobs=1): err= 0: pid=66311: Mon Nov 4 09:59:02 2024 00:09:30.865 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:30.865 slat (nsec): min=11375, max=54358, avg=15507.63, stdev=4082.03 00:09:30.865 clat (usec): min=243, max=383, avg=296.40, stdev=23.12 00:09:30.865 lat (usec): min=258, max=412, avg=311.91, stdev=23.52 00:09:30.865 clat percentiles (usec): 00:09:30.865 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:09:30.865 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:09:30.865 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 330], 95.00th=[ 343], 00:09:30.865 | 99.00th=[ 363], 99.50th=[ 367], 99.90th=[ 383], 99.95th=[ 383], 00:09:30.865 | 99.99th=[ 383] 
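[Editor's note] Between the target start and the first fio pass, fio.sh drives a fixed RPC sequence: create the TCP transport, create seven 64 MiB malloc bdevs, assemble Malloc2/Malloc3 into a RAID-0 and Malloc4-Malloc6 into a concat bdev, publish Malloc0, Malloc1, raid0 and concat0 as namespaces of nqn.2016-06.io.spdk:cnode1, add a TCP listener on 10.0.0.3:4420, and finally connect from the host and wait for four SPDKISFASTANDAWESOME block devices. Condensed into one sketch with the same RPC calls and arguments as the trace (the harness additionally passes --hostnqn/--hostid derived from the VM UUID, omitted here):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
  for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done  # Malloc0..Malloc6: 64 MiB, 512 B blocks
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'               # RAID-0, 64 KiB strips
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Host side: connect over TCP, then wait until all four namespaces surface as block devices.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 4 ]; do sleep 1; done

Those four namespaces are what the fio-wrapper runs exercise as /dev/nvme0n1 through /dev/nvme0n4 with the [global]/[jobN] configuration printed above.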
00:09:30.865 write: IOPS=2044, BW=8180KiB/s (8376kB/s)(8188KiB/1001msec); 0 zone resets 00:09:30.865 slat (usec): min=15, max=109, avg=22.48, stdev= 6.34 00:09:30.865 clat (usec): min=174, max=352, avg=228.96, stdev=23.46 00:09:30.865 lat (usec): min=197, max=380, avg=251.44, stdev=24.40 00:09:30.865 clat percentiles (usec): 00:09:30.865 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:09:30.865 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:09:30.865 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 273], 00:09:30.865 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 318], 99.95th=[ 322], 00:09:30.865 | 99.99th=[ 355] 00:09:30.865 bw ( KiB/s): min= 8192, max= 8192, per=21.18%, avg=8192.00, stdev= 0.00, samples=1 00:09:30.865 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:30.865 lat (usec) : 250=47.11%, 500=52.89% 00:09:30.865 cpu : usr=2.10%, sys=5.50%, ctx=3583, majf=0, minf=11 00:09:30.865 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.866 issued rwts: total=1536,2047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.866 job1: (groupid=0, jobs=1): err= 0: pid=66312: Mon Nov 4 09:59:02 2024 00:09:30.866 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:30.866 slat (nsec): min=9590, max=50991, avg=11999.36, stdev=3075.25 00:09:30.866 clat (usec): min=146, max=347, avg=194.91, stdev=20.61 00:09:30.866 lat (usec): min=160, max=357, avg=206.91, stdev=20.61 00:09:30.866 clat percentiles (usec): 00:09:30.866 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:09:30.866 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:09:30.866 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 235], 00:09:30.866 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 285], 00:09:30.866 | 99.99th=[ 347] 00:09:30.866 write: IOPS=2951, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec); 0 zone resets 00:09:30.866 slat (usec): min=13, max=108, avg=18.92, stdev= 5.33 00:09:30.866 clat (usec): min=101, max=663, avg=137.71, stdev=23.66 00:09:30.866 lat (usec): min=118, max=694, avg=156.63, stdev=24.49 00:09:30.866 clat percentiles (usec): 00:09:30.866 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 124], 00:09:30.866 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 139], 00:09:30.866 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 172], 00:09:30.866 | 99.00th=[ 192], 99.50th=[ 208], 99.90th=[ 498], 99.95th=[ 594], 00:09:30.866 | 99.99th=[ 668] 00:09:30.866 bw ( KiB/s): min=12288, max=12288, per=31.76%, avg=12288.00, stdev= 0.00, samples=1 00:09:30.866 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:30.866 lat (usec) : 250=99.22%, 500=0.74%, 750=0.04% 00:09:30.866 cpu : usr=2.10%, sys=6.50%, ctx=5515, majf=0, minf=11 00:09:30.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.866 issued rwts: total=2560,2954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.866 job2: (groupid=0, jobs=1): err= 0: pid=66313: Mon 
Nov 4 09:59:02 2024 00:09:30.866 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:30.866 slat (nsec): min=10368, max=77637, avg=12950.35, stdev=3411.55 00:09:30.866 clat (usec): min=160, max=290, avg=204.65, stdev=19.45 00:09:30.866 lat (usec): min=173, max=302, avg=217.60, stdev=19.47 00:09:30.866 clat percentiles (usec): 00:09:30.866 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:09:30.866 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:09:30.866 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 241], 00:09:30.866 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 285], 99.95th=[ 289], 00:09:30.866 | 99.99th=[ 293] 00:09:30.866 write: IOPS=2629, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:09:30.866 slat (nsec): min=14107, max=97573, avg=20774.11, stdev=6046.75 00:09:30.866 clat (usec): min=110, max=1408, avg=144.41, stdev=34.58 00:09:30.866 lat (usec): min=130, max=1432, avg=165.19, stdev=35.42 00:09:30.866 clat percentiles (usec): 00:09:30.866 | 1.00th=[ 116], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 129], 00:09:30.866 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:09:30.866 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 178], 00:09:30.866 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 441], 99.95th=[ 840], 00:09:30.866 | 99.99th=[ 1401] 00:09:30.866 bw ( KiB/s): min=12288, max=12288, per=31.76%, avg=12288.00, stdev= 0.00, samples=1 00:09:30.866 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:30.866 lat (usec) : 250=98.69%, 500=1.27%, 1000=0.02% 00:09:30.866 lat (msec) : 2=0.02% 00:09:30.866 cpu : usr=2.00%, sys=6.80%, ctx=5192, majf=0, minf=5 00:09:30.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.866 issued rwts: total=2560,2632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.866 job3: (groupid=0, jobs=1): err= 0: pid=66314: Mon Nov 4 09:59:02 2024 00:09:30.866 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:30.866 slat (nsec): min=7757, max=50145, avg=10062.59, stdev=2958.56 00:09:30.866 clat (usec): min=247, max=396, avg=302.34, stdev=23.98 00:09:30.866 lat (usec): min=255, max=416, avg=312.40, stdev=24.04 00:09:30.866 clat percentiles (usec): 00:09:30.866 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 281], 00:09:30.866 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:09:30.866 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 338], 95.00th=[ 347], 00:09:30.866 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 396], 99.95th=[ 396], 00:09:30.866 | 99.99th=[ 396] 00:09:30.866 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:30.866 slat (usec): min=10, max=120, avg=16.53, stdev= 5.14 00:09:30.866 clat (usec): min=80, max=363, avg=235.20, stdev=24.99 00:09:30.866 lat (usec): min=190, max=381, avg=251.74, stdev=24.84 00:09:30.866 clat percentiles (usec): 00:09:30.866 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 215], 00:09:30.866 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:09:30.866 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:09:30.866 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 330], 99.95th=[ 351], 00:09:30.866 | 99.99th=[ 363] 00:09:30.866 bw ( KiB/s): 
min= 8208, max= 8208, per=21.22%, avg=8208.00, stdev= 0.00, samples=1 00:09:30.866 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:09:30.866 lat (usec) : 100=0.03%, 250=43.05%, 500=56.92% 00:09:30.866 cpu : usr=1.10%, sys=4.10%, ctx=3586, majf=0, minf=9 00:09:30.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.866 issued rwts: total=1536,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.866 00:09:30.866 Run status group 0 (all jobs): 00:09:30.866 READ: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:09:30.866 WRITE: bw=37.8MiB/s (39.6MB/s), 8180KiB/s-11.5MiB/s (8376kB/s-12.1MB/s), io=37.8MiB (39.7MB), run=1001-1001msec 00:09:30.866 00:09:30.866 Disk stats (read/write): 00:09:30.866 nvme0n1: ios=1557/1536, merge=0/0, ticks=468/355, in_queue=823, util=88.16% 00:09:30.866 nvme0n2: ios=2247/2560, merge=0/0, ticks=454/373, in_queue=827, util=88.45% 00:09:30.866 nvme0n3: ios=2048/2484, merge=0/0, ticks=429/371, in_queue=800, util=89.26% 00:09:30.866 nvme0n4: ios=1507/1536, merge=0/0, ticks=421/315, in_queue=736, util=89.81% 00:09:30.866 09:59:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:30.866 [global] 00:09:30.866 thread=1 00:09:30.866 invalidate=1 00:09:30.866 rw=randwrite 00:09:30.866 time_based=1 00:09:30.866 runtime=1 00:09:30.866 ioengine=libaio 00:09:30.866 direct=1 00:09:30.866 bs=4096 00:09:30.866 iodepth=1 00:09:30.866 norandommap=0 00:09:30.866 numjobs=1 00:09:30.866 00:09:30.866 verify_dump=1 00:09:30.866 verify_backlog=512 00:09:30.866 verify_state_save=0 00:09:30.866 do_verify=1 00:09:30.866 verify=crc32c-intel 00:09:30.866 [job0] 00:09:30.866 filename=/dev/nvme0n1 00:09:30.866 [job1] 00:09:30.866 filename=/dev/nvme0n2 00:09:30.866 [job2] 00:09:30.866 filename=/dev/nvme0n3 00:09:30.866 [job3] 00:09:30.866 filename=/dev/nvme0n4 00:09:30.866 Could not set queue depth (nvme0n1) 00:09:30.866 Could not set queue depth (nvme0n2) 00:09:30.866 Could not set queue depth (nvme0n3) 00:09:30.866 Could not set queue depth (nvme0n4) 00:09:30.866 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.866 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.866 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.866 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.866 fio-3.35 00:09:30.866 Starting 4 threads 00:09:32.242 00:09:32.242 job0: (groupid=0, jobs=1): err= 0: pid=66378: Mon Nov 4 09:59:04 2024 00:09:32.242 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:32.242 slat (nsec): min=7736, max=37823, avg=9616.26, stdev=2230.81 00:09:32.242 clat (usec): min=225, max=611, avg=300.00, stdev=27.22 00:09:32.242 lat (usec): min=234, max=623, avg=309.62, stdev=27.26 00:09:32.242 clat percentiles (usec): 00:09:32.242 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 00:09:32.242 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 
00:09:32.242 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 343], 00:09:32.242 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 594], 99.95th=[ 611], 00:09:32.242 | 99.99th=[ 611] 00:09:32.242 write: IOPS=1981, BW=7924KiB/s (8114kB/s)(7932KiB/1001msec); 0 zone resets 00:09:32.242 slat (usec): min=9, max=246, avg=16.06, stdev= 7.38 00:09:32.242 clat (usec): min=188, max=868, avg=246.12, stdev=28.53 00:09:32.242 lat (usec): min=203, max=882, avg=262.18, stdev=30.01 00:09:32.242 clat percentiles (usec): 00:09:32.242 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 225], 00:09:32.242 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:09:32.242 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:09:32.242 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 449], 99.95th=[ 873], 00:09:32.242 | 99.99th=[ 873] 00:09:32.242 bw ( KiB/s): min= 8192, max= 8192, per=21.48%, avg=8192.00, stdev= 0.00, samples=1 00:09:32.242 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:32.242 lat (usec) : 250=34.70%, 500=65.19%, 750=0.09%, 1000=0.03% 00:09:32.242 cpu : usr=1.20%, sys=3.80%, ctx=3522, majf=0, minf=19 00:09:32.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.242 issued rwts: total=1536,1983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.242 job1: (groupid=0, jobs=1): err= 0: pid=66379: Mon Nov 4 09:59:04 2024 00:09:32.242 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:32.242 slat (nsec): min=10170, max=62384, avg=12139.22, stdev=2798.90 00:09:32.242 clat (usec): min=148, max=2037, avg=196.59, stdev=41.37 00:09:32.242 lat (usec): min=160, max=2050, avg=208.73, stdev=41.51 00:09:32.242 clat percentiles (usec): 00:09:32.242 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:09:32.242 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:32.242 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 231], 00:09:32.242 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 396], 99.95th=[ 502], 00:09:32.242 | 99.99th=[ 2040] 00:09:32.242 write: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1001msec); 0 zone resets 00:09:32.242 slat (usec): min=13, max=146, avg=17.82, stdev= 4.51 00:09:32.242 clat (usec): min=97, max=1592, avg=133.94, stdev=34.21 00:09:32.242 lat (usec): min=114, max=1612, avg=151.77, stdev=35.04 00:09:32.242 clat percentiles (usec): 00:09:32.242 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 122], 00:09:32.242 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:09:32.242 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 153], 95.00th=[ 161], 00:09:32.242 | 99.00th=[ 198], 99.50th=[ 245], 99.90th=[ 449], 99.95th=[ 570], 00:09:32.242 | 99.99th=[ 1598] 00:09:32.242 bw ( KiB/s): min=12288, max=12288, per=32.22%, avg=12288.00, stdev= 0.00, samples=1 00:09:32.242 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:32.242 lat (usec) : 100=0.02%, 250=99.28%, 500=0.63%, 750=0.04% 00:09:32.242 lat (msec) : 2=0.02%, 4=0.02% 00:09:32.242 cpu : usr=1.40%, sys=7.00%, ctx=5576, majf=0, minf=6 00:09:32.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.242 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.242 issued rwts: total=2560,3016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.242 job2: (groupid=0, jobs=1): err= 0: pid=66380: Mon Nov 4 09:59:04 2024 00:09:32.242 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:32.242 slat (nsec): min=8022, max=49075, avg=14457.96, stdev=2901.80 00:09:32.242 clat (usec): min=222, max=669, avg=294.67, stdev=26.70 00:09:32.242 lat (usec): min=236, max=702, avg=309.13, stdev=27.10 00:09:32.242 clat percentiles (usec): 00:09:32.242 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:09:32.242 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:09:32.242 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 334], 00:09:32.242 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 611], 99.95th=[ 668], 00:09:32.242 | 99.99th=[ 668] 00:09:32.242 write: IOPS=1982, BW=7928KiB/s (8118kB/s)(7936KiB/1001msec); 0 zone resets 00:09:32.242 slat (usec): min=15, max=109, avg=22.60, stdev= 6.55 00:09:32.242 clat (usec): min=159, max=848, avg=239.01, stdev=26.74 00:09:32.242 lat (usec): min=179, max=868, avg=261.61, stdev=28.67 00:09:32.242 clat percentiles (usec): 00:09:32.242 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 221], 00:09:32.242 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:09:32.242 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:09:32.242 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 367], 99.95th=[ 848], 00:09:32.242 | 99.99th=[ 848] 00:09:32.242 bw ( KiB/s): min= 8208, max= 8208, per=21.52%, avg=8208.00, stdev= 0.00, samples=1 00:09:32.242 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:09:32.242 lat (usec) : 250=40.51%, 500=59.38%, 750=0.09%, 1000=0.03% 00:09:32.242 cpu : usr=2.60%, sys=4.80%, ctx=3520, majf=0, minf=9 00:09:32.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.242 issued rwts: total=1536,1984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.242 job3: (groupid=0, jobs=1): err= 0: pid=66381: Mon Nov 4 09:59:04 2024 00:09:32.242 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(9.93MiB/1001msec) 00:09:32.242 slat (nsec): min=10423, max=67264, avg=15269.62, stdev=7167.50 00:09:32.242 clat (usec): min=158, max=469, avg=208.57, stdev=20.08 00:09:32.242 lat (usec): min=170, max=481, avg=223.83, stdev=22.82 00:09:32.242 clat percentiles (usec): 00:09:32.242 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:09:32.242 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:09:32.242 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 235], 95.00th=[ 245], 00:09:32.242 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 306], 00:09:32.242 | 99.99th=[ 469] 00:09:32.242 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:32.242 slat (usec): min=13, max=121, avg=19.44, stdev= 6.14 00:09:32.242 clat (usec): min=111, max=298, avg=145.51, stdev=16.24 00:09:32.242 lat (usec): min=131, max=362, avg=164.95, stdev=18.38 00:09:32.242 clat percentiles (usec): 00:09:32.242 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 133], 00:09:32.242 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 
143], 60.00th=[ 147], 00:09:32.242 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 176], 00:09:32.243 | 99.00th=[ 192], 99.50th=[ 202], 99.90th=[ 241], 99.95th=[ 281], 00:09:32.243 | 99.99th=[ 297] 00:09:32.243 bw ( KiB/s): min=12288, max=12288, per=32.22%, avg=12288.00, stdev= 0.00, samples=1 00:09:32.243 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:32.243 lat (usec) : 250=98.33%, 500=1.67% 00:09:32.243 cpu : usr=1.90%, sys=7.30%, ctx=5102, majf=0, minf=15 00:09:32.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:32.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:32.243 issued rwts: total=2542,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:32.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:32.243 00:09:32.243 Run status group 0 (all jobs): 00:09:32.243 READ: bw=31.9MiB/s (33.4MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.9MiB (33.5MB), run=1001-1001msec 00:09:32.243 WRITE: bw=37.2MiB/s (39.0MB/s), 7924KiB/s-11.8MiB/s (8114kB/s-12.3MB/s), io=37.3MiB (39.1MB), run=1001-1001msec 00:09:32.243 00:09:32.243 Disk stats (read/write): 00:09:32.243 nvme0n1: ios=1516/1536, merge=0/0, ticks=436/333, in_queue=769, util=88.08% 00:09:32.243 nvme0n2: ios=2270/2560, merge=0/0, ticks=472/367, in_queue=839, util=88.25% 00:09:32.243 nvme0n3: ios=1466/1536, merge=0/0, ticks=430/373, in_queue=803, util=89.15% 00:09:32.243 nvme0n4: ios=2048/2477, merge=0/0, ticks=427/379, in_queue=806, util=89.71% 00:09:32.243 09:59:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:32.243 [global] 00:09:32.243 thread=1 00:09:32.243 invalidate=1 00:09:32.243 rw=write 00:09:32.243 time_based=1 00:09:32.243 runtime=1 00:09:32.243 ioengine=libaio 00:09:32.243 direct=1 00:09:32.243 bs=4096 00:09:32.243 iodepth=128 00:09:32.243 norandommap=0 00:09:32.243 numjobs=1 00:09:32.243 00:09:32.243 verify_dump=1 00:09:32.243 verify_backlog=512 00:09:32.243 verify_state_save=0 00:09:32.243 do_verify=1 00:09:32.243 verify=crc32c-intel 00:09:32.243 [job0] 00:09:32.243 filename=/dev/nvme0n1 00:09:32.243 [job1] 00:09:32.243 filename=/dev/nvme0n2 00:09:32.243 [job2] 00:09:32.243 filename=/dev/nvme0n3 00:09:32.243 [job3] 00:09:32.243 filename=/dev/nvme0n4 00:09:32.243 Could not set queue depth (nvme0n1) 00:09:32.243 Could not set queue depth (nvme0n2) 00:09:32.243 Could not set queue depth (nvme0n3) 00:09:32.243 Could not set queue depth (nvme0n4) 00:09:32.243 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.243 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.243 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.243 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:32.243 fio-3.35 00:09:32.243 Starting 4 threads 00:09:33.621 00:09:33.621 job0: (groupid=0, jobs=1): err= 0: pid=66436: Mon Nov 4 09:59:05 2024 00:09:33.621 read: IOPS=2810, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1002msec) 00:09:33.621 slat (usec): min=5, max=6896, avg=166.67, stdev=738.85 00:09:33.621 clat (usec): min=393, max=43965, avg=20561.24, stdev=4686.65 00:09:33.621 lat (usec): min=4312, max=43980, 
avg=20727.90, stdev=4715.80 00:09:33.621 clat percentiles (usec): 00:09:33.621 | 1.00th=[ 7963], 5.00th=[14877], 10.00th=[15926], 20.00th=[17695], 00:09:33.621 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19006], 60.00th=[20317], 00:09:33.621 | 70.00th=[22938], 80.00th=[25822], 90.00th=[26346], 95.00th=[26608], 00:09:33.621 | 99.00th=[34341], 99.50th=[37487], 99.90th=[43779], 99.95th=[43779], 00:09:33.621 | 99.99th=[43779] 00:09:33.621 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:09:33.621 slat (usec): min=12, max=8770, avg=164.41, stdev=718.69 00:09:33.621 clat (usec): min=8560, max=61107, avg=22283.02, stdev=13231.70 00:09:33.621 lat (usec): min=8610, max=61132, avg=22447.44, stdev=13323.59 00:09:33.621 clat percentiles (usec): 00:09:33.621 | 1.00th=[10683], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:09:33.621 | 30.00th=[12518], 40.00th=[12780], 50.00th=[15401], 60.00th=[16450], 00:09:33.621 | 70.00th=[30802], 80.00th=[35390], 90.00th=[44827], 95.00th=[49546], 00:09:33.621 | 99.00th=[54789], 99.50th=[54789], 99.90th=[61080], 99.95th=[61080], 00:09:33.621 | 99.99th=[61080] 00:09:33.621 bw ( KiB/s): min=10952, max=13624, per=25.03%, avg=12288.00, stdev=1889.39, samples=2 00:09:33.621 iops : min= 2738, max= 3406, avg=3072.00, stdev=472.35, samples=2 00:09:33.621 lat (usec) : 500=0.02% 00:09:33.621 lat (msec) : 10=1.22%, 20=61.12%, 50=35.34%, 100=2.29% 00:09:33.621 cpu : usr=4.40%, sys=7.79%, ctx=295, majf=0, minf=11 00:09:33.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:33.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.621 issued rwts: total=2816,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.621 job1: (groupid=0, jobs=1): err= 0: pid=66437: Mon Nov 4 09:59:05 2024 00:09:33.621 read: IOPS=2595, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1002msec) 00:09:33.621 slat (usec): min=7, max=6711, avg=176.38, stdev=650.13 00:09:33.621 clat (usec): min=1699, max=33983, avg=22617.66, stdev=3362.85 00:09:33.621 lat (usec): min=1711, max=34045, avg=22794.04, stdev=3372.94 00:09:33.621 clat percentiles (usec): 00:09:33.621 | 1.00th=[ 2089], 5.00th=[19268], 10.00th=[20317], 20.00th=[21627], 00:09:33.621 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22414], 60.00th=[22938], 00:09:33.622 | 70.00th=[23462], 80.00th=[24511], 90.00th=[25822], 95.00th=[27132], 00:09:33.622 | 99.00th=[29230], 99.50th=[29230], 99.90th=[33817], 99.95th=[33817], 00:09:33.622 | 99.99th=[33817] 00:09:33.622 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:09:33.622 slat (usec): min=8, max=7294, avg=169.41, stdev=623.30 00:09:33.622 clat (usec): min=3734, max=36622, avg=21852.66, stdev=5162.81 00:09:33.622 lat (usec): min=3747, max=36646, avg=22022.07, stdev=5189.20 00:09:33.622 clat percentiles (usec): 00:09:33.622 | 1.00th=[ 7111], 5.00th=[14222], 10.00th=[15533], 20.00th=[17171], 00:09:33.622 | 30.00th=[19530], 40.00th=[21103], 50.00th=[22152], 60.00th=[23200], 00:09:33.622 | 70.00th=[23987], 80.00th=[24511], 90.00th=[27657], 95.00th=[33424], 00:09:33.622 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:09:33.622 | 99.99th=[36439] 00:09:33.622 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:33.622 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:33.622 
lat (msec) : 2=0.35%, 4=0.49%, 10=0.74%, 20=20.17%, 50=78.25% 00:09:33.622 cpu : usr=2.60%, sys=7.69%, ctx=867, majf=0, minf=17 00:09:33.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:33.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.622 issued rwts: total=2601,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.622 job2: (groupid=0, jobs=1): err= 0: pid=66438: Mon Nov 4 09:59:05 2024 00:09:33.622 read: IOPS=2724, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1004msec) 00:09:33.622 slat (usec): min=7, max=6869, avg=185.87, stdev=657.76 00:09:33.622 clat (usec): min=3545, max=40417, avg=23111.78, stdev=3429.99 00:09:33.622 lat (usec): min=6191, max=40431, avg=23297.65, stdev=3419.78 00:09:33.622 clat percentiles (usec): 00:09:33.622 | 1.00th=[15926], 5.00th=[18744], 10.00th=[19530], 20.00th=[20841], 00:09:33.622 | 30.00th=[21627], 40.00th=[22152], 50.00th=[22414], 60.00th=[22938], 00:09:33.622 | 70.00th=[24511], 80.00th=[25822], 90.00th=[27657], 95.00th=[28705], 00:09:33.622 | 99.00th=[34341], 99.50th=[34341], 99.90th=[36963], 99.95th=[40633], 00:09:33.622 | 99.99th=[40633] 00:09:33.622 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:09:33.622 slat (usec): min=9, max=7238, avg=152.98, stdev=597.56 00:09:33.622 clat (usec): min=11417, max=35088, avg=20608.58, stdev=4440.26 00:09:33.622 lat (usec): min=11432, max=35109, avg=20761.56, stdev=4476.56 00:09:33.622 clat percentiles (usec): 00:09:33.622 | 1.00th=[13173], 5.00th=[14877], 10.00th=[15926], 20.00th=[16909], 00:09:33.622 | 30.00th=[17433], 40.00th=[18220], 50.00th=[19792], 60.00th=[21365], 00:09:33.622 | 70.00th=[23200], 80.00th=[23987], 90.00th=[26346], 95.00th=[29230], 00:09:33.622 | 99.00th=[32900], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:09:33.622 | 99.99th=[34866] 00:09:33.622 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=2 00:09:33.622 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:33.622 lat (msec) : 4=0.02%, 10=0.21%, 20=33.51%, 50=66.26% 00:09:33.622 cpu : usr=2.49%, sys=8.28%, ctx=845, majf=0, minf=11 00:09:33.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:33.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.622 issued rwts: total=2735,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.622 job3: (groupid=0, jobs=1): err= 0: pid=66439: Mon Nov 4 09:59:05 2024 00:09:33.622 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:09:33.622 slat (usec): min=4, max=12107, avg=183.25, stdev=1036.87 00:09:33.622 clat (usec): min=10163, max=48974, avg=23538.45, stdev=8795.69 00:09:33.622 lat (usec): min=10179, max=48990, avg=23721.70, stdev=8804.74 00:09:33.622 clat percentiles (usec): 00:09:33.622 | 1.00th=[10683], 5.00th=[15008], 10.00th=[16319], 20.00th=[16712], 00:09:33.622 | 30.00th=[16909], 40.00th=[17433], 50.00th=[21365], 60.00th=[24511], 00:09:33.622 | 70.00th=[25560], 80.00th=[26870], 90.00th=[38536], 95.00th=[43254], 00:09:33.622 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:09:33.622 | 99.99th=[49021] 00:09:33.622 write: IOPS=3098, BW=12.1MiB/s 
(12.7MB/s)(12.1MiB/1002msec); 0 zone resets 00:09:33.622 slat (usec): min=12, max=11320, avg=132.38, stdev=661.63 00:09:33.622 clat (usec): min=443, max=35963, avg=17298.11, stdev=5276.16 00:09:33.622 lat (usec): min=2916, max=35991, avg=17430.49, stdev=5258.92 00:09:33.622 clat percentiles (usec): 00:09:33.622 | 1.00th=[ 3752], 5.00th=[13304], 10.00th=[13435], 20.00th=[13566], 00:09:33.622 | 30.00th=[13960], 40.00th=[14222], 50.00th=[16909], 60.00th=[17171], 00:09:33.622 | 70.00th=[17433], 80.00th=[18744], 90.00th=[24773], 95.00th=[30802], 00:09:33.622 | 99.00th=[32113], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:09:33.622 | 99.99th=[35914] 00:09:33.622 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:33.622 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:33.622 lat (usec) : 500=0.02% 00:09:33.622 lat (msec) : 4=0.52%, 20=63.85%, 50=35.62% 00:09:33.622 cpu : usr=3.30%, sys=8.69%, ctx=196, majf=0, minf=11 00:09:33.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:33.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.622 issued rwts: total=3072,3105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.622 00:09:33.622 Run status group 0 (all jobs): 00:09:33.622 READ: bw=43.7MiB/s (45.8MB/s), 10.1MiB/s-12.0MiB/s (10.6MB/s-12.6MB/s), io=43.8MiB (46.0MB), run=1002-1004msec 00:09:33.622 WRITE: bw=47.9MiB/s (50.3MB/s), 12.0MiB/s-12.1MiB/s (12.5MB/s-12.7MB/s), io=48.1MiB (50.5MB), run=1002-1004msec 00:09:33.622 00:09:33.622 Disk stats (read/write): 00:09:33.622 nvme0n1: ios=2610/2775, merge=0/0, ticks=26645/24565, in_queue=51210, util=88.50% 00:09:33.622 nvme0n2: ios=2348/2560, merge=0/0, ticks=16817/17508, in_queue=34325, util=88.70% 00:09:33.622 nvme0n3: ios=2428/2560, merge=0/0, ticks=18345/16026, in_queue=34371, util=89.53% 00:09:33.622 nvme0n4: ios=2560/2592, merge=0/0, ticks=15419/10072, in_queue=25491, util=89.77% 00:09:33.622 09:59:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:33.622 [global] 00:09:33.622 thread=1 00:09:33.622 invalidate=1 00:09:33.622 rw=randwrite 00:09:33.622 time_based=1 00:09:33.622 runtime=1 00:09:33.622 ioengine=libaio 00:09:33.622 direct=1 00:09:33.622 bs=4096 00:09:33.622 iodepth=128 00:09:33.622 norandommap=0 00:09:33.622 numjobs=1 00:09:33.622 00:09:33.622 verify_dump=1 00:09:33.622 verify_backlog=512 00:09:33.622 verify_state_save=0 00:09:33.622 do_verify=1 00:09:33.622 verify=crc32c-intel 00:09:33.622 [job0] 00:09:33.622 filename=/dev/nvme0n1 00:09:33.622 [job1] 00:09:33.622 filename=/dev/nvme0n2 00:09:33.622 [job2] 00:09:33.622 filename=/dev/nvme0n3 00:09:33.622 [job3] 00:09:33.622 filename=/dev/nvme0n4 00:09:33.622 Could not set queue depth (nvme0n1) 00:09:33.622 Could not set queue depth (nvme0n2) 00:09:33.622 Could not set queue depth (nvme0n3) 00:09:33.622 Could not set queue depth (nvme0n4) 00:09:33.622 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.622 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.622 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:33.622 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:33.622 fio-3.35 00:09:33.622 Starting 4 threads 00:09:34.999 00:09:34.999 job0: (groupid=0, jobs=1): err= 0: pid=66492: Mon Nov 4 09:59:06 2024 00:09:34.999 read: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec) 00:09:34.999 slat (usec): min=6, max=18672, avg=384.88, stdev=1690.50 00:09:34.999 clat (usec): min=25861, max=95246, avg=52096.72, stdev=14585.42 00:09:34.999 lat (usec): min=27898, max=96377, avg=52481.60, stdev=14581.84 00:09:34.999 clat percentiles (usec): 00:09:34.999 | 1.00th=[27919], 5.00th=[32113], 10.00th=[34866], 20.00th=[39584], 00:09:34.999 | 30.00th=[42206], 40.00th=[43254], 50.00th=[48497], 60.00th=[55313], 00:09:34.999 | 70.00th=[60556], 80.00th=[64226], 90.00th=[72877], 95.00th=[74974], 00:09:34.999 | 99.00th=[94897], 99.50th=[94897], 99.90th=[94897], 99.95th=[94897], 00:09:34.999 | 99.99th=[94897] 00:09:34.999 write: IOPS=1126, BW=4508KiB/s (4616kB/s)(4580KiB/1016msec); 0 zone resets 00:09:34.999 slat (usec): min=6, max=24534, avg=524.56, stdev=2108.15 00:09:34.999 clat (msec): min=9, max=127, avg=65.80, stdev=36.37 00:09:34.999 lat (msec): min=9, max=127, avg=66.33, stdev=36.62 00:09:34.999 clat percentiles (msec): 00:09:34.999 | 1.00th=[ 10], 5.00th=[ 10], 10.00th=[ 24], 20.00th=[ 29], 00:09:34.999 | 30.00th=[ 37], 40.00th=[ 42], 50.00th=[ 59], 60.00th=[ 88], 00:09:34.999 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 112], 95.00th=[ 114], 00:09:34.999 | 99.00th=[ 114], 99.50th=[ 120], 99.90th=[ 126], 99.95th=[ 128], 00:09:34.999 | 99.99th=[ 128] 00:09:34.999 bw ( KiB/s): min= 3120, max= 5120, per=8.64%, avg=4120.00, stdev=1414.21, samples=2 00:09:34.999 iops : min= 780, max= 1280, avg=1030.00, stdev=353.55, samples=2 00:09:34.999 lat (msec) : 10=2.90%, 20=0.18%, 50=47.58%, 100=34.02%, 250=15.31% 00:09:34.999 cpu : usr=0.79%, sys=3.65%, ctx=316, majf=0, minf=7 00:09:34.999 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:09:34.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.999 issued rwts: total=1024,1145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.999 job1: (groupid=0, jobs=1): err= 0: pid=66493: Mon Nov 4 09:59:06 2024 00:09:34.999 read: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec) 00:09:34.999 slat (usec): min=5, max=36180, avg=463.99, stdev=2152.91 00:09:34.999 clat (msec): min=24, max=113, avg=53.42, stdev=16.92 00:09:34.999 lat (msec): min=25, max=121, avg=53.88, stdev=17.05 00:09:34.999 clat percentiles (msec): 00:09:34.999 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 43], 00:09:34.999 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 51], 60.00th=[ 56], 00:09:34.999 | 70.00th=[ 63], 80.00th=[ 65], 90.00th=[ 71], 95.00th=[ 92], 00:09:34.999 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 107], 99.95th=[ 114], 00:09:34.999 | 99.99th=[ 114] 00:09:34.999 write: IOPS=1219, BW=4879KiB/s (4996kB/s)(4952KiB/1015msec); 0 zone resets 00:09:34.999 slat (usec): min=6, max=31543, avg=419.55, stdev=2114.12 00:09:34.999 clat (msec): min=9, max=144, avg=59.65, stdev=39.43 00:09:34.999 lat (msec): min=10, max=149, avg=60.07, stdev=39.71 00:09:34.999 clat percentiles (msec): 00:09:34.999 | 1.00th=[ 11], 5.00th=[ 11], 10.00th=[ 16], 20.00th=[ 21], 00:09:34.999 | 30.00th=[ 27], 40.00th=[ 37], 50.00th=[ 42], 
60.00th=[ 80], 00:09:34.999 | 70.00th=[ 102], 80.00th=[ 110], 90.00th=[ 113], 95.00th=[ 114], 00:09:34.999 | 99.00th=[ 115], 99.50th=[ 115], 99.90th=[ 144], 99.95th=[ 144], 00:09:34.999 | 99.99th=[ 144] 00:09:34.999 bw ( KiB/s): min= 2976, max= 5915, per=9.32%, avg=4445.50, stdev=2078.19, samples=2 00:09:34.999 iops : min= 744, max= 1478, avg=1111.00, stdev=519.02, samples=2 00:09:34.999 lat (msec) : 10=0.04%, 20=10.04%, 50=41.82%, 100=29.66%, 250=18.44% 00:09:34.999 cpu : usr=1.48%, sys=2.96%, ctx=320, majf=0, minf=9 00:09:34.999 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:09:34.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.999 issued rwts: total=1024,1238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.999 job2: (groupid=0, jobs=1): err= 0: pid=66494: Mon Nov 4 09:59:06 2024 00:09:34.999 read: IOPS=3799, BW=14.8MiB/s (15.6MB/s)(15.0MiB/1010msec) 00:09:34.999 slat (usec): min=7, max=13484, avg=135.63, stdev=927.48 00:09:34.999 clat (usec): min=3238, max=33726, avg=18755.52, stdev=3466.76 00:09:34.999 lat (usec): min=9169, max=36094, avg=18891.15, stdev=3535.70 00:09:34.999 clat percentiles (usec): 00:09:34.999 | 1.00th=[10028], 5.00th=[14746], 10.00th=[15401], 20.00th=[15664], 00:09:34.999 | 30.00th=[16057], 40.00th=[16450], 50.00th=[18482], 60.00th=[20841], 00:09:34.999 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22938], 95.00th=[23725], 00:09:34.999 | 99.00th=[25035], 99.50th=[25035], 99.90th=[31851], 99.95th=[32637], 00:09:34.999 | 99.99th=[33817] 00:09:34.999 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:09:34.999 slat (usec): min=7, max=19816, avg=110.57, stdev=711.36 00:09:34.999 clat (usec): min=6470, max=32958, avg=13628.93, stdev=3867.36 00:09:34.999 lat (usec): min=9050, max=32983, avg=13739.50, stdev=3836.54 00:09:34.999 clat percentiles (usec): 00:09:34.999 | 1.00th=[ 8356], 5.00th=[10814], 10.00th=[11207], 20.00th=[11600], 00:09:34.999 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:09:34.999 | 70.00th=[12780], 80.00th=[15139], 90.00th=[20055], 95.00th=[20841], 00:09:34.999 | 99.00th=[32113], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:09:34.999 | 99.99th=[32900] 00:09:34.999 bw ( KiB/s): min=16384, max=16416, per=34.40%, avg=16400.00, stdev=22.63, samples=2 00:09:34.999 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:09:34.999 lat (msec) : 4=0.01%, 10=1.44%, 20=69.99%, 50=28.56% 00:09:34.999 cpu : usr=3.87%, sys=11.00%, ctx=198, majf=0, minf=15 00:09:34.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:34.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.999 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.999 job3: (groupid=0, jobs=1): err= 0: pid=66495: Mon Nov 4 09:59:06 2024 00:09:34.999 read: IOPS=5362, BW=20.9MiB/s (22.0MB/s)(21.0MiB/1002msec) 00:09:34.999 slat (usec): min=8, max=4563, avg=90.78, stdev=417.50 00:09:34.999 clat (usec): min=881, max=16281, avg=11581.93, stdev=1175.43 00:09:34.999 lat (usec): min=3813, max=16310, avg=11672.71, stdev=1198.10 00:09:34.999 clat percentiles (usec): 00:09:34.999 | 1.00th=[ 
5735], 5.00th=[10159], 10.00th=[10552], 20.00th=[11207], 00:09:34.999 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:09:34.999 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[13435], 00:09:34.999 | 99.00th=[14877], 99.50th=[15270], 99.90th=[15795], 99.95th=[16057], 00:09:34.999 | 99.99th=[16319] 00:09:34.999 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:34.999 slat (usec): min=10, max=16754, avg=83.58, stdev=524.79 00:09:34.999 clat (usec): min=6944, max=28625, avg=11036.44, stdev=1658.33 00:09:34.999 lat (usec): min=6963, max=28663, avg=11120.02, stdev=1727.03 00:09:34.999 clat percentiles (usec): 00:09:34.999 | 1.00th=[ 7701], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:09:34.999 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:09:34.999 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[13960], 00:09:34.999 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21103], 99.95th=[28705], 00:09:34.999 | 99.99th=[28705] 00:09:34.999 bw ( KiB/s): min=21440, max=23616, per=47.25%, avg=22528.00, stdev=1538.66, samples=2 00:09:34.999 iops : min= 5360, max= 5904, avg=5632.00, stdev=384.67, samples=2 00:09:34.999 lat (usec) : 1000=0.01% 00:09:34.999 lat (msec) : 4=0.10%, 10=7.01%, 20=92.29%, 50=0.59% 00:09:34.999 cpu : usr=3.70%, sys=14.99%, ctx=356, majf=0, minf=13 00:09:34.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:34.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.999 issued rwts: total=5373,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.999 00:09:34.999 Run status group 0 (all jobs): 00:09:34.999 READ: bw=43.3MiB/s (45.4MB/s), 4031KiB/s-20.9MiB/s (4128kB/s-22.0MB/s), io=44.0MiB (46.1MB), run=1002-1016msec 00:09:34.999 WRITE: bw=46.6MiB/s (48.8MB/s), 4508KiB/s-22.0MiB/s (4616kB/s-23.0MB/s), io=47.3MiB (49.6MB), run=1002-1016msec 00:09:34.999 00:09:34.999 Disk stats (read/write): 00:09:34.999 nvme0n1: ios=769/1024, merge=0/0, ticks=18367/35572, in_queue=53939, util=87.07% 00:09:34.999 nvme0n2: ios=879/1024, merge=0/0, ticks=24309/28787, in_queue=53096, util=87.26% 00:09:34.999 nvme0n3: ios=3072/3584, merge=0/0, ticks=54503/46177, in_queue=100680, util=89.06% 00:09:34.999 nvme0n4: ios=4608/4699, merge=0/0, ticks=25703/21783, in_queue=47486, util=88.47% 00:09:34.999 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:35.000 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66515 00:09:35.000 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:35.000 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:35.000 [global] 00:09:35.000 thread=1 00:09:35.000 invalidate=1 00:09:35.000 rw=read 00:09:35.000 time_based=1 00:09:35.000 runtime=10 00:09:35.000 ioengine=libaio 00:09:35.000 direct=1 00:09:35.000 bs=4096 00:09:35.000 iodepth=1 00:09:35.000 norandommap=1 00:09:35.000 numjobs=1 00:09:35.000 00:09:35.000 [job0] 00:09:35.000 filename=/dev/nvme0n1 00:09:35.000 [job1] 00:09:35.000 filename=/dev/nvme0n2 00:09:35.000 [job2] 00:09:35.000 filename=/dev/nvme0n3 00:09:35.000 [job3] 00:09:35.000 filename=/dev/nvme0n4 00:09:35.000 Could not set queue 
depth (nvme0n1) 00:09:35.000 Could not set queue depth (nvme0n2) 00:09:35.000 Could not set queue depth (nvme0n3) 00:09:35.000 Could not set queue depth (nvme0n4) 00:09:35.000 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.000 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.000 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.000 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.000 fio-3.35 00:09:35.000 Starting 4 threads 00:09:38.282 09:59:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:38.282 fio: pid=66563, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:38.282 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=63651840, buflen=4096 00:09:38.282 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:38.282 fio: pid=66562, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:38.282 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=35893248, buflen=4096 00:09:38.541 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.541 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:38.541 fio: pid=66560, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:38.541 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=39993344, buflen=4096 00:09:38.798 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:38.799 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:39.057 fio: pid=66561, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:39.057 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=46915584, buflen=4096 00:09:39.057 00:09:39.057 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66560: Mon Nov 4 09:59:11 2024 00:09:39.057 read: IOPS=2820, BW=11.0MiB/s (11.6MB/s)(38.1MiB/3462msec) 00:09:39.057 slat (usec): min=8, max=12710, avg=19.46, stdev=212.38 00:09:39.057 clat (usec): min=128, max=3417, avg=333.68, stdev=85.93 00:09:39.057 lat (usec): min=141, max=13023, avg=353.14, stdev=228.63 00:09:39.057 clat percentiles (usec): 00:09:39.057 | 1.00th=[ 190], 5.00th=[ 206], 10.00th=[ 223], 20.00th=[ 310], 00:09:39.057 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:09:39.057 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 408], 00:09:39.057 | 99.00th=[ 494], 99.50th=[ 586], 99.90th=[ 1205], 99.95th=[ 2073], 00:09:39.057 | 99.99th=[ 3425] 00:09:39.057 bw ( KiB/s): min= 9896, max=11176, per=22.45%, avg=10793.33, stdev=500.99, samples=6 00:09:39.057 iops : min= 2474, max= 2794, avg=2698.33, stdev=125.25, samples=6 00:09:39.057 lat (usec) : 250=12.37%, 500=86.68%, 750=0.70%, 
1000=0.10% 00:09:39.057 lat (msec) : 2=0.09%, 4=0.05% 00:09:39.057 cpu : usr=0.98%, sys=4.31%, ctx=9771, majf=0, minf=1 00:09:39.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.057 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.057 issued rwts: total=9765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.057 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66561: Mon Nov 4 09:59:11 2024 00:09:39.057 read: IOPS=3024, BW=11.8MiB/s (12.4MB/s)(44.7MiB/3787msec) 00:09:39.057 slat (usec): min=7, max=14905, avg=22.18, stdev=236.27 00:09:39.057 clat (usec): min=124, max=7817, avg=306.62, stdev=127.29 00:09:39.057 lat (usec): min=136, max=15143, avg=328.81, stdev=269.28 00:09:39.057 clat percentiles (usec): 00:09:39.057 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 208], 00:09:39.057 | 30.00th=[ 297], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:09:39.057 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 392], 95.00th=[ 420], 00:09:39.057 | 99.00th=[ 562], 99.50th=[ 611], 99.90th=[ 1254], 99.95th=[ 1565], 00:09:39.057 | 99.99th=[ 2802] 00:09:39.057 bw ( KiB/s): min= 9808, max=15761, per=23.72%, avg=11404.71, stdev=1973.44, samples=7 00:09:39.057 iops : min= 2452, max= 3940, avg=2851.14, stdev=493.27, samples=7 00:09:39.057 lat (usec) : 250=26.29%, 500=71.39%, 750=2.03%, 1000=0.10% 00:09:39.057 lat (msec) : 2=0.15%, 4=0.02%, 10=0.01% 00:09:39.057 cpu : usr=0.95%, sys=4.83%, ctx=11466, majf=0, minf=1 00:09:39.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.057 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.057 issued rwts: total=11455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.057 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66562: Mon Nov 4 09:59:11 2024 00:09:39.057 read: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(34.2MiB/3210msec) 00:09:39.057 slat (usec): min=7, max=11296, avg=17.20, stdev=146.09 00:09:39.057 clat (usec): min=145, max=3282, avg=347.66, stdev=74.21 00:09:39.057 lat (usec): min=158, max=11560, avg=364.86, stdev=163.20 00:09:39.057 clat percentiles (usec): 00:09:39.057 | 1.00th=[ 225], 5.00th=[ 273], 10.00th=[ 306], 20.00th=[ 322], 00:09:39.057 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:09:39.057 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 416], 00:09:39.057 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[ 1123], 99.95th=[ 2040], 00:09:39.057 | 99.99th=[ 3294] 00:09:39.057 bw ( KiB/s): min= 9896, max=11776, per=22.70%, avg=10914.67, stdev=643.03, samples=6 00:09:39.057 iops : min= 2474, max= 2944, avg=2728.67, stdev=160.76, samples=6 00:09:39.057 lat (usec) : 250=2.44%, 500=96.65%, 750=0.70%, 1000=0.09% 00:09:39.057 lat (msec) : 2=0.06%, 4=0.06% 00:09:39.057 cpu : usr=0.53%, sys=4.49%, ctx=8767, majf=0, minf=2 00:09:39.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.057 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.057 
issued rwts: total=8764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.057 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66563: Mon Nov 4 09:59:11 2024 00:09:39.057 read: IOPS=5255, BW=20.5MiB/s (21.5MB/s)(60.7MiB/2957msec) 00:09:39.057 slat (nsec): min=9992, max=99964, avg=13221.88, stdev=3094.88 00:09:39.057 clat (usec): min=144, max=2070, avg=175.87, stdev=23.23 00:09:39.057 lat (usec): min=155, max=2082, avg=189.09, stdev=24.17 00:09:39.057 clat percentiles (usec): 00:09:39.057 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:09:39.057 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:09:39.057 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 208], 00:09:39.057 | 99.00th=[ 239], 99.50th=[ 251], 99.90th=[ 277], 99.95th=[ 297], 00:09:39.057 | 99.99th=[ 490] 00:09:39.057 bw ( KiB/s): min=19240, max=22080, per=43.40%, avg=20868.80, stdev=1081.55, samples=5 00:09:39.057 iops : min= 4810, max= 5520, avg=5217.20, stdev=270.39, samples=5 00:09:39.057 lat (usec) : 250=99.45%, 500=0.54% 00:09:39.057 lat (msec) : 4=0.01% 00:09:39.057 cpu : usr=1.01%, sys=6.77%, ctx=15542, majf=0, minf=2 00:09:39.057 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.057 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.057 issued rwts: total=15541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.057 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:39.057 00:09:39.057 Run status group 0 (all jobs): 00:09:39.057 READ: bw=47.0MiB/s (49.2MB/s), 10.7MiB/s-20.5MiB/s (11.2MB/s-21.5MB/s), io=178MiB (186MB), run=2957-3787msec 00:09:39.057 00:09:39.057 Disk stats (read/write): 00:09:39.057 nvme0n1: ios=9379/0, merge=0/0, ticks=3063/0, in_queue=3063, util=95.39% 00:09:39.057 nvme0n2: ios=10452/0, merge=0/0, ticks=3377/0, in_queue=3377, util=95.40% 00:09:39.057 nvme0n3: ios=8483/0, merge=0/0, ticks=2840/0, in_queue=2840, util=96.28% 00:09:39.057 nvme0n4: ios=15102/0, merge=0/0, ticks=2705/0, in_queue=2705, util=96.77% 00:09:39.057 09:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.057 09:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:39.316 09:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.316 09:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:39.573 09:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.573 09:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:39.832 09:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:39.832 09:59:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:40.092 
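The "Operation not supported" errors above are the whole point of this stage: target/fio.sh starts the read workload in the background and then deletes the backing bdevs over RPC while the jobs are still running, so every job is expected to die with err=95. The remaining malloc bdevs are removed the same way just below. A condensed sketch of that pattern, with illustrative variable names (the real script tracks the wrapper PID in fio_pid and iterates $malloc_bdevs, $raid_malloc_bdevs and $concat_malloc_bdevs):

    # start the read workload in the background (fio.sh@58-59 above)
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # pull the bdevs out from under the running job; reads then fail with err=95
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_raid_delete concat0
    $rpc bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $rpc bdev_malloc_delete "$malloc_bdev"
    done
    # the job has to exit non-zero for the hotplug check to pass
    if ! wait "$fio_pid"; then
        echo 'nvmf hotplug test: fio failed as expected'
    fi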
09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:40.092 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66515 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:40.685 nvmf hotplug test: fio failed as expected 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:40.685 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.944 rmmod nvme_tcp 00:09:40.944 rmmod nvme_fabrics 00:09:40.944 rmmod nvme_keyring 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66134 ']' 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66134 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 66134 ']' 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 66134 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66134 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:40.944 killing process with pid 66134 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66134' 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 66134 00:09:40.944 09:59:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 66134 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.203 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:41.463 00:09:41.463 real 0m19.972s 00:09:41.463 user 1m15.701s 00:09:41.463 sys 0m9.349s 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:41.463 ************************************ 00:09:41.463 END TEST nvmf_fio_target 00:09:41.463 ************************************ 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.463 ************************************ 00:09:41.463 START TEST nvmf_bdevio 00:09:41.463 ************************************ 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:41.463 * Looking for test storage... 
00:09:41.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:41.463 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:41.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.723 --rc genhtml_branch_coverage=1 00:09:41.723 --rc genhtml_function_coverage=1 00:09:41.723 --rc genhtml_legend=1 00:09:41.723 --rc geninfo_all_blocks=1 00:09:41.723 --rc geninfo_unexecuted_blocks=1 00:09:41.723 00:09:41.723 ' 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:41.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.723 --rc genhtml_branch_coverage=1 00:09:41.723 --rc genhtml_function_coverage=1 00:09:41.723 --rc genhtml_legend=1 00:09:41.723 --rc geninfo_all_blocks=1 00:09:41.723 --rc geninfo_unexecuted_blocks=1 00:09:41.723 00:09:41.723 ' 00:09:41.723 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:41.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.723 --rc genhtml_branch_coverage=1 00:09:41.723 --rc genhtml_function_coverage=1 00:09:41.723 --rc genhtml_legend=1 00:09:41.723 --rc geninfo_all_blocks=1 00:09:41.724 --rc geninfo_unexecuted_blocks=1 00:09:41.724 00:09:41.724 ' 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:41.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.724 --rc genhtml_branch_coverage=1 00:09:41.724 --rc genhtml_function_coverage=1 00:09:41.724 --rc genhtml_legend=1 00:09:41.724 --rc geninfo_all_blocks=1 00:09:41.724 --rc geninfo_unexecuted_blocks=1 00:09:41.724 00:09:41.724 ' 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.724 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
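nvmftestinit, traced below, first clears any stale interfaces and then rebuilds the veth test network that the TCP transport runs over. A minimal sketch of the topology it ends up with, using the interface and address names echoed from nvmf/common.sh in the trace (the second initiator/target pair, 10.0.0.2 and 10.0.0.4, and the link-up steps are handled the same way and are omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target reachability check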
00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:41.724 Cannot find device "nvmf_init_br" 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:41.724 Cannot find device "nvmf_init_br2" 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:41.724 Cannot find device "nvmf_tgt_br" 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:41.724 Cannot find device "nvmf_tgt_br2" 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:41.724 Cannot find device "nvmf_init_br" 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:41.724 Cannot find device "nvmf_init_br2" 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:41.724 Cannot find device "nvmf_tgt_br" 00:09:41.724 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:41.725 Cannot find device "nvmf_tgt_br2" 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:41.725 Cannot find device "nvmf_br" 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:41.725 Cannot find device "nvmf_init_if" 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:41.725 Cannot find device "nvmf_init_if2" 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:41.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:41.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:41.725 
09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:41.725 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:41.984 09:59:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:41.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:41.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:09:41.984 00:09:41.984 --- 10.0.0.3 ping statistics --- 00:09:41.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.984 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:41.984 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:41.985 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:41.985 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:09:41.985 00:09:41.985 --- 10.0.0.4 ping statistics --- 00:09:41.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.985 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:41.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:41.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:41.985 00:09:41.985 --- 10.0.0.1 ping statistics --- 00:09:41.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.985 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:41.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:41.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:09:41.985 00:09:41.985 --- 10.0.0.2 ping statistics --- 00:09:41.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:41.985 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66883 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66883 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 66883 ']' 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:41.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:41.985 09:59:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.244 [2024-11-04 09:59:14.196910] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
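With the veth network up and the pings answered, nvmfappstart launches the target inside the namespace and blocks until its RPC socket responds. Roughly what the helper does, with an illustrative polling loop standing in for the real waitforlisten from autotest_common.sh:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # poll the RPC socket until the target answers (illustrative loop)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done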
00:09:42.244 [2024-11-04 09:59:14.197026] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.244 [2024-11-04 09:59:14.352769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.503 [2024-11-04 09:59:14.415331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.503 [2024-11-04 09:59:14.415409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.503 [2024-11-04 09:59:14.415420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.503 [2024-11-04 09:59:14.415429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.503 [2024-11-04 09:59:14.415437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.503 [2024-11-04 09:59:14.416610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.503 [2024-11-04 09:59:14.416705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.503 [2024-11-04 09:59:14.416766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.503 [2024-11-04 09:59:14.416769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.503 [2024-11-04 09:59:14.473251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.071 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.071 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:43.071 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.071 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.071 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.330 [2024-11-04 09:59:15.273069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.330 Malloc0 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.330 [2024-11-04 09:59:15.346224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:43.330 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:43.330 { 00:09:43.330 "params": { 00:09:43.330 "name": "Nvme$subsystem", 00:09:43.330 "trtype": "$TEST_TRANSPORT", 00:09:43.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.330 "adrfam": "ipv4", 00:09:43.330 "trsvcid": "$NVMF_PORT", 00:09:43.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.330 "hdgst": ${hdgst:-false}, 00:09:43.330 "ddgst": ${ddgst:-false} 00:09:43.331 }, 00:09:43.331 "method": "bdev_nvme_attach_controller" 00:09:43.331 } 00:09:43.331 EOF 00:09:43.331 )") 00:09:43.331 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:43.331 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
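The bdevio run needs one namespace to exercise, so the script provisions it over RPC before handing the generated JSON config (printed just below) to the bdevio app. The same sequence expressed as plain rpc.py calls; rpc_cmd in the trace is roughly a wrapper around this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420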
00:09:43.331 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:43.331 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:43.331 "params": { 00:09:43.331 "name": "Nvme1", 00:09:43.331 "trtype": "tcp", 00:09:43.331 "traddr": "10.0.0.3", 00:09:43.331 "adrfam": "ipv4", 00:09:43.331 "trsvcid": "4420", 00:09:43.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.331 "hdgst": false, 00:09:43.331 "ddgst": false 00:09:43.331 }, 00:09:43.331 "method": "bdev_nvme_attach_controller" 00:09:43.331 }' 00:09:43.331 [2024-11-04 09:59:15.397113] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:09:43.331 [2024-11-04 09:59:15.397193] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66919 ] 00:09:43.590 [2024-11-04 09:59:15.547729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.590 [2024-11-04 09:59:15.618477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.590 [2024-11-04 09:59:15.618585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.590 [2024-11-04 09:59:15.618608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.590 [2024-11-04 09:59:15.686761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.849 I/O targets: 00:09:43.849 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:43.849 00:09:43.849 00:09:43.849 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.849 http://cunit.sourceforge.net/ 00:09:43.849 00:09:43.849 00:09:43.849 Suite: bdevio tests on: Nvme1n1 00:09:43.849 Test: blockdev write read block ...passed 00:09:43.849 Test: blockdev write zeroes read block ...passed 00:09:43.849 Test: blockdev write zeroes read no split ...passed 00:09:43.849 Test: blockdev write zeroes read split ...passed 00:09:43.849 Test: blockdev write zeroes read split partial ...passed 00:09:43.849 Test: blockdev reset ...[2024-11-04 09:59:15.836488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:43.849 [2024-11-04 09:59:15.836613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c0180 (9): Bad file descriptor 00:09:43.849 [2024-11-04 09:59:15.852518] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:43.849 passed 00:09:43.849 Test: blockdev write read 8 blocks ...passed 00:09:43.849 Test: blockdev write read size > 128k ...passed 00:09:43.849 Test: blockdev write read invalid size ...passed 00:09:43.849 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:43.849 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:43.849 Test: blockdev write read max offset ...passed 00:09:43.849 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:43.849 Test: blockdev writev readv 8 blocks ...passed 00:09:43.849 Test: blockdev writev readv 30 x 1block ...passed 00:09:43.849 Test: blockdev writev readv block ...passed 00:09:43.849 Test: blockdev writev readv size > 128k ...passed 00:09:43.849 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:43.849 Test: blockdev comparev and writev ...[2024-11-04 09:59:15.862049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.849 [2024-11-04 09:59:15.862096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.862117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.849 [2024-11-04 09:59:15.862128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.862401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.849 [2024-11-04 09:59:15.862512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.862536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.849 [2024-11-04 09:59:15.862546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.863064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.849 [2024-11-04 09:59:15.863098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.863117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.849 [2024-11-04 09:59:15.863128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.863395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.849 [2024-11-04 09:59:15.863547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.863691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:43.849 [2024-11-04 09:59:15.863707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:43.849 passed 00:09:43.849 Test: blockdev nvme passthru rw ...passed 00:09:43.849 Test: blockdev nvme passthru vendor specific ...[2024-11-04 09:59:15.864830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:43.849 [2024-11-04 09:59:15.864856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.864966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:43.849 [2024-11-04 09:59:15.864983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:43.849 [2024-11-04 09:59:15.865085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:43.850 [2024-11-04 09:59:15.865100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:43.850 [2024-11-04 09:59:15.865282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:43.850 [2024-11-04 09:59:15.865305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:43.850 passed 00:09:43.850 Test: blockdev nvme admin passthru ...passed 00:09:43.850 Test: blockdev copy ...passed 00:09:43.850 00:09:43.850 Run Summary: Type Total Ran Passed Failed Inactive 00:09:43.850 suites 1 1 n/a 0 0 00:09:43.850 tests 23 23 23 0 0 00:09:43.850 asserts 152 152 152 0 n/a 00:09:43.850 00:09:43.850 Elapsed time = 0.143 seconds 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.109 rmmod nvme_tcp 00:09:44.109 rmmod nvme_fabrics 00:09:44.109 rmmod nvme_keyring 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
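The records above show the initiator-side teardown that nvmftestfini runs once the bdevio suite reports 23/23 tests passed: the test subsystem is removed over RPC, the nvme_tcp/nvme_fabrics/nvme_keyring modules are unloaded, and the target process (pid 66883) is reaped in the records that follow. A minimal standalone sketch of that host-side cleanup, assuming root privileges and using the RPC socket path and subsystem NQN from this run as illustrative values; the retry loop's break condition is simplified compared to the common.sh helper:

    # Remove the test subsystem from the running SPDK target (default RPC socket assumed).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Flush outstanding I/O, then unload the kernel initiator modules in dependency order.
    # nvme-tcp can stay referenced briefly after disconnect, hence the retry loop.
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics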
00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66883 ']' 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66883 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 66883 ']' 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 66883 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66883 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:44.109 killing process with pid 66883 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66883' 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 66883 00:09:44.109 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 66883 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:44.371 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:44.638 00:09:44.638 real 0m3.194s 00:09:44.638 user 0m9.661s 00:09:44.638 sys 0m0.915s 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.638 ************************************ 00:09:44.638 END TEST nvmf_bdevio 00:09:44.638 ************************************ 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:44.638 ************************************ 00:09:44.638 END TEST nvmf_target_core 00:09:44.638 ************************************ 00:09:44.638 00:09:44.638 real 2m36.064s 00:09:44.638 user 6m54.889s 00:09:44.638 sys 0m51.921s 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.638 09:59:16 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:44.638 09:59:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:44.638 09:59:16 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:44.638 09:59:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:44.638 ************************************ 00:09:44.638 START TEST nvmf_target_extra 00:09:44.638 ************************************ 00:09:44.638 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:44.898 * Looking for test storage... 
00:09:44.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.898 --rc genhtml_branch_coverage=1 00:09:44.898 --rc genhtml_function_coverage=1 00:09:44.898 --rc genhtml_legend=1 00:09:44.898 --rc geninfo_all_blocks=1 00:09:44.898 --rc geninfo_unexecuted_blocks=1 00:09:44.898 00:09:44.898 ' 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.898 --rc genhtml_branch_coverage=1 00:09:44.898 --rc genhtml_function_coverage=1 00:09:44.898 --rc genhtml_legend=1 00:09:44.898 --rc geninfo_all_blocks=1 00:09:44.898 --rc geninfo_unexecuted_blocks=1 00:09:44.898 00:09:44.898 ' 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.898 --rc genhtml_branch_coverage=1 00:09:44.898 --rc genhtml_function_coverage=1 00:09:44.898 --rc genhtml_legend=1 00:09:44.898 --rc geninfo_all_blocks=1 00:09:44.898 --rc geninfo_unexecuted_blocks=1 00:09:44.898 00:09:44.898 ' 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.898 --rc genhtml_branch_coverage=1 00:09:44.898 --rc genhtml_function_coverage=1 00:09:44.898 --rc genhtml_legend=1 00:09:44.898 --rc geninfo_all_blocks=1 00:09:44.898 --rc geninfo_unexecuted_blocks=1 00:09:44.898 00:09:44.898 ' 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.898 09:59:16 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.898 09:59:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.898 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.898 09:59:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.898 09:59:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.898 09:59:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.898 09:59:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.899 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.899 ************************************ 00:09:44.899 START TEST nvmf_auth_target 00:09:44.899 ************************************ 00:09:44.899 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:45.159 * Looking for test storage... 
00:09:45.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.159 --rc genhtml_branch_coverage=1 00:09:45.159 --rc genhtml_function_coverage=1 00:09:45.159 --rc genhtml_legend=1 00:09:45.159 --rc geninfo_all_blocks=1 00:09:45.159 --rc geninfo_unexecuted_blocks=1 00:09:45.159 00:09:45.159 ' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.159 --rc genhtml_branch_coverage=1 00:09:45.159 --rc genhtml_function_coverage=1 00:09:45.159 --rc genhtml_legend=1 00:09:45.159 --rc geninfo_all_blocks=1 00:09:45.159 --rc geninfo_unexecuted_blocks=1 00:09:45.159 00:09:45.159 ' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.159 --rc genhtml_branch_coverage=1 00:09:45.159 --rc genhtml_function_coverage=1 00:09:45.159 --rc genhtml_legend=1 00:09:45.159 --rc geninfo_all_blocks=1 00:09:45.159 --rc geninfo_unexecuted_blocks=1 00:09:45.159 00:09:45.159 ' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.159 --rc genhtml_branch_coverage=1 00:09:45.159 --rc genhtml_function_coverage=1 00:09:45.159 --rc genhtml_legend=1 00:09:45.159 --rc geninfo_all_blocks=1 00:09:45.159 --rc geninfo_unexecuted_blocks=1 00:09:45.159 00:09:45.159 ' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.159 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.159 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.160 
09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:45.160 Cannot find device "nvmf_init_br" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:45.160 Cannot find device "nvmf_init_br2" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:45.160 Cannot find device "nvmf_tgt_br" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.160 Cannot find device "nvmf_tgt_br2" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:45.160 Cannot find device "nvmf_init_br" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:45.160 Cannot find device "nvmf_init_br2" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:45.160 Cannot find device "nvmf_tgt_br" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:45.160 Cannot find device "nvmf_tgt_br2" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:45.160 Cannot find device "nvmf_br" 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:45.160 Cannot find device "nvmf_init_if" 00:09:45.160 09:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:45.160 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:45.420 Cannot find device "nvmf_init_if2" 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.420 09:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:45.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:09:45.420 00:09:45.420 --- 10.0.0.3 ping statistics --- 00:09:45.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.420 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:45.420 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:45.420 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:09:45.420 00:09:45.420 --- 10.0.0.4 ping statistics --- 00:09:45.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.420 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:45.420 00:09:45.420 --- 10.0.0.1 ping statistics --- 00:09:45.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.420 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:45.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:45.420 00:09:45.420 --- 10.0.0.2 ping statistics --- 00:09:45.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.420 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.420 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67203 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67203 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67203 ']' 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
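At this point the auth-target test launches its own nvmf_tgt instance (nvmfpid=67203 above) inside the nvmf_tgt_ns_spdk namespace that was just wired up, and blocks in waitforlisten until the application's RPC socket answers. A rough equivalent of that launch is sketched below; the polling loop is a simplified assumption standing in for the real waitforlisten helper, not its actual implementation:

    # Start the target inside the test network namespace created by nvmf_veth_init.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    # Simplified stand-in for waitforlisten: poll the default RPC socket until the
    # target answers a harmless RPC, then continue with the test setup.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The second application started right after this (hostpid 67233, spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth) plays the host side of the DH-HMAC-CHAP exchange, which is why the script keeps two separate RPC sockets.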
00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:45.679 09:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67233 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:45.938 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c0df2fb8731235624ff2413f94080e3f082cf41a4ecf4f84 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jGA 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c0df2fb8731235624ff2413f94080e3f082cf41a4ecf4f84 0 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c0df2fb8731235624ff2413f94080e3f082cf41a4ecf4f84 0 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c0df2fb8731235624ff2413f94080e3f082cf41a4ecf4f84 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:45.939 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.198 09:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jGA 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jGA 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.jGA 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8db7e68115461230cfad9c3298198dea2fd20e9da0bb5e64b71a8cd3e96ab9ae 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PLq 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8db7e68115461230cfad9c3298198dea2fd20e9da0bb5e64b71a8cd3e96ab9ae 3 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8db7e68115461230cfad9c3298198dea2fd20e9da0bb5e64b71a8cd3e96ab9ae 3 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8db7e68115461230cfad9c3298198dea2fd20e9da0bb5e64b71a8cd3e96ab9ae 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PLq 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PLq 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.PLq 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:46.198 09:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=abe7ae968b07306a8a3e1217f120b9d8 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.PeW 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key abe7ae968b07306a8a3e1217f120b9d8 1 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 abe7ae968b07306a8a3e1217f120b9d8 1 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=abe7ae968b07306a8a3e1217f120b9d8 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.198 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.PeW 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.PeW 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.PeW 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3534d50ea5adb274289757e7419b953fc2003a928a1ddabf 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1Zh 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3534d50ea5adb274289757e7419b953fc2003a928a1ddabf 2 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3534d50ea5adb274289757e7419b953fc2003a928a1ddabf 2 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3534d50ea5adb274289757e7419b953fc2003a928a1ddabf 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1Zh 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1Zh 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1Zh 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cfec20e7e466388a60c9a8aa8edd7b9e8b51ce26543bc259 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9cm 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cfec20e7e466388a60c9a8aa8edd7b9e8b51ce26543bc259 2 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cfec20e7e466388a60c9a8aa8edd7b9e8b51ce26543bc259 2 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cfec20e7e466388a60c9a8aa8edd7b9e8b51ce26543bc259 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:46.199 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9cm 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9cm 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.9cm 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.458 09:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c233e66356ca38e1ff8bd008565f51b7 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tMq 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c233e66356ca38e1ff8bd008565f51b7 1 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c233e66356ca38e1ff8bd008565f51b7 1 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c233e66356ca38e1ff8bd008565f51b7 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tMq 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tMq 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.tMq 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9777d836bb723820c47c3f3b2b7ff10f55030b39f150b755614e160b9115e2d0 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.709 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
9777d836bb723820c47c3f3b2b7ff10f55030b39f150b755614e160b9115e2d0 3 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9777d836bb723820c47c3f3b2b7ff10f55030b39f150b755614e160b9115e2d0 3 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9777d836bb723820c47c3f3b2b7ff10f55030b39f150b755614e160b9115e2d0 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.709 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.709 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.709 00:09:46.458 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:46.459 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67203 00:09:46.459 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67203 ']' 00:09:46.459 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.459 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:46.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.459 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.459 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:46.459 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67233 /var/tmp/host.sock 00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 67233 ']' 00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:46.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
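Note on the key material generated above: each gen_dhchap_key/format_dhchap_key call traced here reads len/2 random bytes from /dev/urandom as a hex string, wraps it as a DH-HMAC-CHAP secret of the form DHHC-1:<hash id>:<base64>:, writes it to a 0600 temp file, and echoes the path. The base64 payload is the ASCII hex string itself followed by a 4-byte little-endian CRC-32, which is why the DHHC-1:03:OGRiN2U2...: secret passed to the nvme connect calls later in this log decodes back to the 8db7e681... key generated here. A minimal standalone sketch of the same transformation (the gen_key name and the inline python3 call are mine; the real helpers live in nvmf/common.sh):

# gen_key <hash id 0..3 = null/sha256/sha384/sha512> <hex length>  ->  prints the path of a 0600 key file
gen_key() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # e.g. len=64 -> 32 random bytes -> 64 hex chars
    file=$(mktemp -t spdk.key-sketch.XXX)
    # DHHC-1:<hash id>:<base64(ASCII hex key + little-endian CRC-32)>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()), end="")' "$key" "$digest" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
gen_key 3 64   # same shape as the "gen_dhchap_key sha512 64" call traced above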
00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:46.718 09:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jGA 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.977 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.235 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.235 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jGA 00:09:47.235 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jGA 00:09:47.494 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.PLq ]] 00:09:47.494 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PLq 00:09:47.494 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.494 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.494 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.494 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PLq 00:09:47.494 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PLq 00:09:47.752 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:47.752 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PeW 00:09:47.752 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.752 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.752 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.752 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.PeW 00:09:47.752 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.PeW 00:09:48.009 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.1Zh ]] 00:09:48.009 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1Zh 00:09:48.009 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.009 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.009 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.009 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1Zh 00:09:48.009 09:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1Zh 00:09:48.268 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:48.268 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9cm 00:09:48.268 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.268 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.268 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.268 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.9cm 00:09:48.268 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.9cm 00:09:48.526 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.tMq ]] 00:09:48.526 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tMq 00:09:48.526 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.526 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.526 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.526 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tMq 00:09:48.526 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tMq 00:09:48.784 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:48.784 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.709 00:09:48.784 09:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.784 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.784 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.784 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.709 00:09:48.784 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.709 00:09:49.043 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:49.043 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:49.043 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:49.043 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:49.043 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:49.043 09:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.302 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.561 00:09:49.561 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.561 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.561 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:49.820 { 00:09:49.820 "cntlid": 1, 00:09:49.820 "qid": 0, 00:09:49.820 "state": "enabled", 00:09:49.820 "thread": "nvmf_tgt_poll_group_000", 00:09:49.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:09:49.820 "listen_address": { 00:09:49.820 "trtype": "TCP", 00:09:49.820 "adrfam": "IPv4", 00:09:49.820 "traddr": "10.0.0.3", 00:09:49.820 "trsvcid": "4420" 00:09:49.820 }, 00:09:49.820 "peer_address": { 00:09:49.820 "trtype": "TCP", 00:09:49.820 "adrfam": "IPv4", 00:09:49.820 "traddr": "10.0.0.1", 00:09:49.820 "trsvcid": "58148" 00:09:49.820 }, 00:09:49.820 "auth": { 00:09:49.820 "state": "completed", 00:09:49.820 "digest": "sha256", 00:09:49.820 "dhgroup": "null" 00:09:49.820 } 00:09:49.820 } 00:09:49.820 ]' 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:49.820 09:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.079 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.079 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.079 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.338 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:09:50.338 09:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:09:55.606 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.606 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:55.606 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.606 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.606 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.606 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.606 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:55.606 09:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.606 09:59:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:55.606 00:09:55.606 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:55.607 { 00:09:55.607 "cntlid": 3, 00:09:55.607 "qid": 0, 00:09:55.607 "state": "enabled", 00:09:55.607 "thread": "nvmf_tgt_poll_group_000", 00:09:55.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:09:55.607 "listen_address": { 00:09:55.607 "trtype": "TCP", 00:09:55.607 "adrfam": "IPv4", 00:09:55.607 "traddr": "10.0.0.3", 00:09:55.607 "trsvcid": "4420" 00:09:55.607 }, 00:09:55.607 "peer_address": { 00:09:55.607 "trtype": "TCP", 00:09:55.607 "adrfam": "IPv4", 00:09:55.607 "traddr": "10.0.0.1", 00:09:55.607 "trsvcid": "39922" 00:09:55.607 }, 00:09:55.607 "auth": { 00:09:55.607 "state": "completed", 00:09:55.607 "digest": "sha256", 00:09:55.607 "dhgroup": "null" 00:09:55.607 } 00:09:55.607 } 00:09:55.607 ]' 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:55.607 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:55.865 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:55.865 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:55.865 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:55.865 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:55.865 09:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.124 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret 
DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:09:56.124 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:09:56.690 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:56.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:56.949 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:56.949 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.949 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.949 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.949 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:56.949 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:56.949 09:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.207 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:57.466 00:09:57.466 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.466 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.466 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.724 { 00:09:57.724 "cntlid": 5, 00:09:57.724 "qid": 0, 00:09:57.724 "state": "enabled", 00:09:57.724 "thread": "nvmf_tgt_poll_group_000", 00:09:57.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:09:57.724 "listen_address": { 00:09:57.724 "trtype": "TCP", 00:09:57.724 "adrfam": "IPv4", 00:09:57.724 "traddr": "10.0.0.3", 00:09:57.724 "trsvcid": "4420" 00:09:57.724 }, 00:09:57.724 "peer_address": { 00:09:57.724 "trtype": "TCP", 00:09:57.724 "adrfam": "IPv4", 00:09:57.724 "traddr": "10.0.0.1", 00:09:57.724 "trsvcid": "39954" 00:09:57.724 }, 00:09:57.724 "auth": { 00:09:57.724 "state": "completed", 00:09:57.724 "digest": "sha256", 00:09:57.724 "dhgroup": "null" 00:09:57.724 } 00:09:57.724 } 00:09:57.724 ]' 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.724 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.982 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:57.982 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.982 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.982 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.982 09:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.241 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:09:58.241 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:09:58.808 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.808 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:09:58.808 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.808 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.809 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.809 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.809 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:58.809 09:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:59.068 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:59.636 00:09:59.636 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.636 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.636 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.894 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.894 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.894 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.894 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.894 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.894 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.894 { 00:09:59.894 "cntlid": 7, 00:09:59.894 "qid": 0, 00:09:59.894 "state": "enabled", 00:09:59.894 "thread": "nvmf_tgt_poll_group_000", 00:09:59.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:09:59.894 "listen_address": { 00:09:59.894 "trtype": "TCP", 00:09:59.894 "adrfam": "IPv4", 00:09:59.894 "traddr": "10.0.0.3", 00:09:59.894 "trsvcid": "4420" 00:09:59.894 }, 00:09:59.894 "peer_address": { 00:09:59.894 "trtype": "TCP", 00:09:59.894 "adrfam": "IPv4", 00:09:59.895 "traddr": "10.0.0.1", 00:09:59.895 "trsvcid": "39986" 00:09:59.895 }, 00:09:59.895 "auth": { 00:09:59.895 "state": "completed", 00:09:59.895 "digest": "sha256", 00:09:59.895 "dhgroup": "null" 00:09:59.895 } 00:09:59.895 } 00:09:59.895 ]' 00:09:59.895 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.895 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.895 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.895 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:59.895 09:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.895 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.895 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.895 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:00.153 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:00.153 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:01.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.090 09:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.090 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.349 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.349 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.349 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.349 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:01.607 00:10:01.607 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.607 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.607 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.870 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.870 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.870 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.870 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.870 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.871 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.871 { 00:10:01.871 "cntlid": 9, 00:10:01.871 "qid": 0, 00:10:01.871 "state": "enabled", 00:10:01.871 "thread": "nvmf_tgt_poll_group_000", 00:10:01.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:01.871 "listen_address": { 00:10:01.871 "trtype": "TCP", 00:10:01.871 "adrfam": "IPv4", 00:10:01.871 "traddr": "10.0.0.3", 00:10:01.871 "trsvcid": "4420" 00:10:01.871 }, 00:10:01.871 "peer_address": { 00:10:01.871 "trtype": "TCP", 00:10:01.871 "adrfam": "IPv4", 00:10:01.871 "traddr": "10.0.0.1", 00:10:01.871 "trsvcid": "40014" 00:10:01.871 }, 00:10:01.871 "auth": { 00:10:01.871 "state": "completed", 00:10:01.871 "digest": "sha256", 00:10:01.871 "dhgroup": "ffdhe2048" 00:10:01.871 } 00:10:01.871 } 00:10:01.871 ]' 00:10:01.871 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.871 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.871 09:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.871 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:01.871 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:02.140 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:02.140 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:02.140 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:02.399 
09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:02.399 09:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:02.966 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.966 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:02.966 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.966 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.966 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.966 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.966 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:02.966 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:03.224 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:03.224 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:03.224 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:03.224 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:03.224 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:03.225 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:03.225 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.225 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.225 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.225 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.225 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.225 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.225 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:03.791 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.791 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.791 { 00:10:03.791 "cntlid": 11, 00:10:03.791 "qid": 0, 00:10:03.791 "state": "enabled", 00:10:03.791 "thread": "nvmf_tgt_poll_group_000", 00:10:03.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:03.791 "listen_address": { 00:10:03.791 "trtype": "TCP", 00:10:03.791 "adrfam": "IPv4", 00:10:03.791 "traddr": "10.0.0.3", 00:10:03.791 "trsvcid": "4420" 00:10:03.791 }, 00:10:03.791 "peer_address": { 00:10:03.791 "trtype": "TCP", 00:10:03.791 "adrfam": "IPv4", 00:10:03.791 "traddr": "10.0.0.1", 00:10:03.791 "trsvcid": "59428" 00:10:03.791 }, 00:10:03.791 "auth": { 00:10:03.791 "state": "completed", 00:10:03.791 "digest": "sha256", 00:10:03.791 "dhgroup": "ffdhe2048" 00:10:03.791 } 00:10:03.791 } 00:10:03.791 ]' 00:10:04.050 09:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.050 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.050 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.050 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:04.050 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:04.050 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:04.050 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:04.050 
09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:04.308 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:04.308 09:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:05.243 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.244 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.244 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.811 00:10:05.811 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.811 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.811 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.069 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.069 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.069 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.069 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.069 09:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.069 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:06.069 { 00:10:06.069 "cntlid": 13, 00:10:06.069 "qid": 0, 00:10:06.069 "state": "enabled", 00:10:06.069 "thread": "nvmf_tgt_poll_group_000", 00:10:06.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:06.069 "listen_address": { 00:10:06.069 "trtype": "TCP", 00:10:06.069 "adrfam": "IPv4", 00:10:06.069 "traddr": "10.0.0.3", 00:10:06.069 "trsvcid": "4420" 00:10:06.069 }, 00:10:06.069 "peer_address": { 00:10:06.069 "trtype": "TCP", 00:10:06.069 "adrfam": "IPv4", 00:10:06.069 "traddr": "10.0.0.1", 00:10:06.069 "trsvcid": "59458" 00:10:06.069 }, 00:10:06.069 "auth": { 00:10:06.070 "state": "completed", 00:10:06.070 "digest": "sha256", 00:10:06.070 "dhgroup": "ffdhe2048" 00:10:06.070 } 00:10:06.070 } 00:10:06.070 ]' 00:10:06.070 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.070 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.070 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.070 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:06.070 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.070 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.070 09:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.070 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.342 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:06.342 09:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:06.948 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.948 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:06.948 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.948 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.948 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.948 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.948 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:06.948 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
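For readers following the trace: each pass of the keyid loop above exercises one DH-CHAP key against the sha256 digest and the current DH group. The host app is restricted to that digest/dhgroup pair, the host NQN is registered on the subsystem with the key under test, a controller is attached over TCP so authentication runs, the target's qpair list is checked for auth state "completed" with the expected digest and dhgroup, and the controller is detached; the same key is then re-verified through the kernel nvme connect path before the host entry is removed. A minimal sketch of that RPC sequence, using the socket, addresses and NQNs visible in this run (key1/ckey1 stand in for whichever keyid the loop is on; the target-side calls go through the test's rpc_cmd wrapper, whose socket path is not shown in this trace, so no -s option is given for them here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host app: allow only the digest/dhgroup combination under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target app: register the host NQN with the key (and controller key) being tested
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host app: attaching the controller triggers DH-CHAP authentication on the new qpair
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # target app: the qpair should report auth.state "completed" with the expected digest/dhgroup
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  # host app: tear down before the next iteration
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0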
00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.207 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.775 00:10:07.775 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.775 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.775 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.033 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.033 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.033 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.033 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.033 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.033 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.033 { 00:10:08.033 "cntlid": 15, 00:10:08.033 "qid": 0, 00:10:08.033 "state": "enabled", 00:10:08.033 "thread": "nvmf_tgt_poll_group_000", 00:10:08.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:08.033 "listen_address": { 00:10:08.033 "trtype": "TCP", 00:10:08.033 "adrfam": "IPv4", 00:10:08.033 "traddr": "10.0.0.3", 00:10:08.033 "trsvcid": "4420" 00:10:08.033 }, 00:10:08.033 "peer_address": { 00:10:08.033 "trtype": "TCP", 00:10:08.033 "adrfam": "IPv4", 00:10:08.033 "traddr": "10.0.0.1", 00:10:08.033 "trsvcid": "59476" 00:10:08.033 }, 00:10:08.033 "auth": { 00:10:08.033 "state": "completed", 00:10:08.033 "digest": "sha256", 00:10:08.033 "dhgroup": "ffdhe2048" 00:10:08.033 } 00:10:08.033 } 00:10:08.033 ]' 00:10:08.033 09:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.033 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.033 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.033 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:08.033 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.033 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.033 
09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.033 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.600 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:08.600 09:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.167 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.426 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.684 00:10:09.684 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:09.684 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:09.684 09:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.252 { 00:10:10.252 "cntlid": 17, 00:10:10.252 "qid": 0, 00:10:10.252 "state": "enabled", 00:10:10.252 "thread": "nvmf_tgt_poll_group_000", 00:10:10.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:10.252 "listen_address": { 00:10:10.252 "trtype": "TCP", 00:10:10.252 "adrfam": "IPv4", 00:10:10.252 "traddr": "10.0.0.3", 00:10:10.252 "trsvcid": "4420" 00:10:10.252 }, 00:10:10.252 "peer_address": { 00:10:10.252 "trtype": "TCP", 00:10:10.252 "adrfam": "IPv4", 00:10:10.252 "traddr": "10.0.0.1", 00:10:10.252 "trsvcid": "59498" 00:10:10.252 }, 00:10:10.252 "auth": { 00:10:10.252 "state": "completed", 00:10:10.252 "digest": "sha256", 00:10:10.252 "dhgroup": "ffdhe3072" 00:10:10.252 } 00:10:10.252 } 00:10:10.252 ]' 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.252 09:59:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.252 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.510 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:10.511 09:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:11.446 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:12.014 00:10:12.014 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.014 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.014 09:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.273 { 00:10:12.273 "cntlid": 19, 00:10:12.273 "qid": 0, 00:10:12.273 "state": "enabled", 00:10:12.273 "thread": "nvmf_tgt_poll_group_000", 00:10:12.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:12.273 "listen_address": { 00:10:12.273 "trtype": "TCP", 00:10:12.273 "adrfam": "IPv4", 00:10:12.273 "traddr": "10.0.0.3", 00:10:12.273 "trsvcid": "4420" 00:10:12.273 }, 00:10:12.273 "peer_address": { 00:10:12.273 "trtype": "TCP", 00:10:12.273 "adrfam": "IPv4", 00:10:12.273 "traddr": "10.0.0.1", 00:10:12.273 "trsvcid": "59522" 00:10:12.273 }, 00:10:12.273 "auth": { 00:10:12.273 "state": "completed", 00:10:12.273 "digest": "sha256", 00:10:12.273 "dhgroup": "ffdhe3072" 00:10:12.273 } 00:10:12.273 } 00:10:12.273 ]' 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.273 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:12.532 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:12.532 09:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.467 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.725 00:10:13.725 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:13.725 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.725 09:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:14.292 { 00:10:14.292 "cntlid": 21, 00:10:14.292 "qid": 0, 00:10:14.292 "state": "enabled", 00:10:14.292 "thread": "nvmf_tgt_poll_group_000", 00:10:14.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:14.292 "listen_address": { 00:10:14.292 "trtype": "TCP", 00:10:14.292 "adrfam": "IPv4", 00:10:14.292 "traddr": "10.0.0.3", 00:10:14.292 "trsvcid": "4420" 00:10:14.292 }, 00:10:14.292 "peer_address": { 00:10:14.292 "trtype": "TCP", 00:10:14.292 "adrfam": "IPv4", 00:10:14.292 "traddr": "10.0.0.1", 00:10:14.292 "trsvcid": "45352" 00:10:14.292 }, 00:10:14.292 "auth": { 00:10:14.292 "state": "completed", 00:10:14.292 "digest": "sha256", 00:10:14.292 "dhgroup": "ffdhe3072" 00:10:14.292 } 00:10:14.292 } 00:10:14.292 ]' 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.292 09:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:14.292 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:14.553 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:14.553 09:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:15.496 09:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:16.063 00:10:16.063 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.063 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.063 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.322 { 00:10:16.322 "cntlid": 23, 00:10:16.322 "qid": 0, 00:10:16.322 "state": "enabled", 00:10:16.322 "thread": "nvmf_tgt_poll_group_000", 00:10:16.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:16.322 "listen_address": { 00:10:16.322 "trtype": "TCP", 00:10:16.322 "adrfam": "IPv4", 00:10:16.322 "traddr": "10.0.0.3", 00:10:16.322 "trsvcid": "4420" 00:10:16.322 }, 00:10:16.322 "peer_address": { 00:10:16.322 "trtype": "TCP", 00:10:16.322 "adrfam": "IPv4", 00:10:16.322 "traddr": "10.0.0.1", 00:10:16.322 "trsvcid": "45374" 00:10:16.322 }, 00:10:16.322 "auth": { 00:10:16.322 "state": "completed", 00:10:16.322 "digest": "sha256", 00:10:16.322 "dhgroup": "ffdhe3072" 00:10:16.322 } 00:10:16.322 } 00:10:16.322 ]' 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:16.322 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.581 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.581 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.581 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:16.846 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:16.846 09:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.412 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.671 09:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.930 00:10:17.930 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.930 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.930 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.189 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.189 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.189 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.189 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.448 { 00:10:18.448 "cntlid": 25, 00:10:18.448 "qid": 0, 00:10:18.448 "state": "enabled", 00:10:18.448 "thread": "nvmf_tgt_poll_group_000", 00:10:18.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:18.448 "listen_address": { 00:10:18.448 "trtype": "TCP", 00:10:18.448 "adrfam": "IPv4", 00:10:18.448 "traddr": "10.0.0.3", 00:10:18.448 "trsvcid": "4420" 00:10:18.448 }, 00:10:18.448 "peer_address": { 00:10:18.448 "trtype": "TCP", 00:10:18.448 "adrfam": "IPv4", 00:10:18.448 "traddr": "10.0.0.1", 00:10:18.448 "trsvcid": "45398" 00:10:18.448 }, 00:10:18.448 "auth": { 00:10:18.448 "state": "completed", 00:10:18.448 "digest": "sha256", 00:10:18.448 "dhgroup": "ffdhe4096" 00:10:18.448 } 00:10:18.448 } 00:10:18.448 ]' 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.448 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.706 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:18.706 09:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:19.274 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.274 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:19.274 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.274 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.274 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.274 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.274 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:19.274 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:19.842 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:19.842 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.842 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.842 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.843 09:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:20.102 00:10:20.102 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.102 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.102 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.362 { 00:10:20.362 "cntlid": 27, 00:10:20.362 "qid": 0, 00:10:20.362 "state": "enabled", 00:10:20.362 "thread": "nvmf_tgt_poll_group_000", 00:10:20.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:20.362 "listen_address": { 00:10:20.362 "trtype": "TCP", 00:10:20.362 "adrfam": "IPv4", 00:10:20.362 "traddr": "10.0.0.3", 00:10:20.362 "trsvcid": "4420" 00:10:20.362 }, 00:10:20.362 "peer_address": { 00:10:20.362 "trtype": "TCP", 00:10:20.362 "adrfam": "IPv4", 00:10:20.362 "traddr": "10.0.0.1", 00:10:20.362 "trsvcid": "45438" 00:10:20.362 }, 00:10:20.362 "auth": { 00:10:20.362 "state": "completed", 
00:10:20.362 "digest": "sha256", 00:10:20.362 "dhgroup": "ffdhe4096" 00:10:20.362 } 00:10:20.362 } 00:10:20.362 ]' 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.362 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.621 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:20.621 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.621 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.621 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.621 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.880 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:20.880 09:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:21.449 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.449 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:21.449 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.449 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.449 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.449 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.449 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:21.449 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.709 09:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.709 09:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.970 00:10:21.971 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:21.971 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.971 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.230 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.230 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.230 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.230 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.488 { 00:10:22.488 "cntlid": 29, 00:10:22.488 "qid": 0, 00:10:22.488 "state": "enabled", 00:10:22.488 "thread": "nvmf_tgt_poll_group_000", 00:10:22.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:22.488 "listen_address": { 00:10:22.488 "trtype": "TCP", 00:10:22.488 "adrfam": "IPv4", 00:10:22.488 "traddr": "10.0.0.3", 00:10:22.488 "trsvcid": "4420" 00:10:22.488 }, 00:10:22.488 "peer_address": { 00:10:22.488 "trtype": "TCP", 00:10:22.488 "adrfam": 
"IPv4", 00:10:22.488 "traddr": "10.0.0.1", 00:10:22.488 "trsvcid": "45464" 00:10:22.488 }, 00:10:22.488 "auth": { 00:10:22.488 "state": "completed", 00:10:22.488 "digest": "sha256", 00:10:22.488 "dhgroup": "ffdhe4096" 00:10:22.488 } 00:10:22.488 } 00:10:22.488 ]' 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.488 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.748 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:22.748 09:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:23.316 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.575 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:23.575 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.575 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.575 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.575 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.575 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:23.575 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:23.835 09:59:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.835 09:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:24.094 00:10:24.094 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.094 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.094 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.663 { 00:10:24.663 "cntlid": 31, 00:10:24.663 "qid": 0, 00:10:24.663 "state": "enabled", 00:10:24.663 "thread": "nvmf_tgt_poll_group_000", 00:10:24.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:24.663 "listen_address": { 00:10:24.663 "trtype": "TCP", 00:10:24.663 "adrfam": "IPv4", 00:10:24.663 "traddr": "10.0.0.3", 00:10:24.663 "trsvcid": "4420" 00:10:24.663 }, 00:10:24.663 "peer_address": { 00:10:24.663 "trtype": "TCP", 
00:10:24.663 "adrfam": "IPv4", 00:10:24.663 "traddr": "10.0.0.1", 00:10:24.663 "trsvcid": "37420" 00:10:24.663 }, 00:10:24.663 "auth": { 00:10:24.663 "state": "completed", 00:10:24.663 "digest": "sha256", 00:10:24.663 "dhgroup": "ffdhe4096" 00:10:24.663 } 00:10:24.663 } 00:10:24.663 ]' 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.663 09:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.922 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:24.922 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:25.859 09:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:26.118 
09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.118 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:26.377 00:10:26.637 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.637 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.637 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.896 { 00:10:26.896 "cntlid": 33, 00:10:26.896 "qid": 0, 00:10:26.896 "state": "enabled", 00:10:26.896 "thread": "nvmf_tgt_poll_group_000", 00:10:26.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:26.896 "listen_address": { 00:10:26.896 "trtype": "TCP", 00:10:26.896 "adrfam": "IPv4", 00:10:26.896 "traddr": 
"10.0.0.3", 00:10:26.896 "trsvcid": "4420" 00:10:26.896 }, 00:10:26.896 "peer_address": { 00:10:26.896 "trtype": "TCP", 00:10:26.896 "adrfam": "IPv4", 00:10:26.896 "traddr": "10.0.0.1", 00:10:26.896 "trsvcid": "37438" 00:10:26.896 }, 00:10:26.896 "auth": { 00:10:26.896 "state": "completed", 00:10:26.896 "digest": "sha256", 00:10:26.896 "dhgroup": "ffdhe6144" 00:10:26.896 } 00:10:26.896 } 00:10:26.896 ]' 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.896 09:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.155 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:27.155 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:27.721 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.721 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:27.721 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.721 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.721 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.721 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.721 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:27.721 09:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.288 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:28.547 00:10:28.805 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.805 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.805 10:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.064 { 00:10:29.064 "cntlid": 35, 00:10:29.064 "qid": 0, 00:10:29.064 "state": "enabled", 00:10:29.064 "thread": "nvmf_tgt_poll_group_000", 
00:10:29.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:29.064 "listen_address": { 00:10:29.064 "trtype": "TCP", 00:10:29.064 "adrfam": "IPv4", 00:10:29.064 "traddr": "10.0.0.3", 00:10:29.064 "trsvcid": "4420" 00:10:29.064 }, 00:10:29.064 "peer_address": { 00:10:29.064 "trtype": "TCP", 00:10:29.064 "adrfam": "IPv4", 00:10:29.064 "traddr": "10.0.0.1", 00:10:29.064 "trsvcid": "37466" 00:10:29.064 }, 00:10:29.064 "auth": { 00:10:29.064 "state": "completed", 00:10:29.064 "digest": "sha256", 00:10:29.064 "dhgroup": "ffdhe6144" 00:10:29.064 } 00:10:29.064 } 00:10:29.064 ]' 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.064 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.630 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:29.630 10:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:30.196 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.196 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:30.196 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.196 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.196 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.196 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.196 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:30.196 10:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:30.455 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:31.022 00:10:31.022 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.022 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.022 10:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.280 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.280 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.280 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.280 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.280 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.280 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.280 { 
00:10:31.280 "cntlid": 37, 00:10:31.280 "qid": 0, 00:10:31.280 "state": "enabled", 00:10:31.280 "thread": "nvmf_tgt_poll_group_000", 00:10:31.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:31.280 "listen_address": { 00:10:31.280 "trtype": "TCP", 00:10:31.280 "adrfam": "IPv4", 00:10:31.280 "traddr": "10.0.0.3", 00:10:31.280 "trsvcid": "4420" 00:10:31.280 }, 00:10:31.280 "peer_address": { 00:10:31.281 "trtype": "TCP", 00:10:31.281 "adrfam": "IPv4", 00:10:31.281 "traddr": "10.0.0.1", 00:10:31.281 "trsvcid": "37482" 00:10:31.281 }, 00:10:31.281 "auth": { 00:10:31.281 "state": "completed", 00:10:31.281 "digest": "sha256", 00:10:31.281 "dhgroup": "ffdhe6144" 00:10:31.281 } 00:10:31.281 } 00:10:31.281 ]' 00:10:31.281 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.281 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:31.281 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.281 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:31.281 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.281 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.281 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.281 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.539 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:31.539 10:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:32.475 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.476 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:32.476 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.476 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.476 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.476 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.476 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:32.476 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:32.735 10:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:32.993 00:10:32.993 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.993 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.993 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.252 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.252 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.252 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.252 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.252 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.252 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:33.252 { 00:10:33.252 "cntlid": 39, 00:10:33.252 "qid": 0, 00:10:33.252 "state": "enabled", 00:10:33.252 "thread": "nvmf_tgt_poll_group_000", 00:10:33.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:33.252 "listen_address": { 00:10:33.252 "trtype": "TCP", 00:10:33.252 "adrfam": "IPv4", 00:10:33.252 "traddr": "10.0.0.3", 00:10:33.252 "trsvcid": "4420" 00:10:33.252 }, 00:10:33.252 "peer_address": { 00:10:33.252 "trtype": "TCP", 00:10:33.252 "adrfam": "IPv4", 00:10:33.252 "traddr": "10.0.0.1", 00:10:33.252 "trsvcid": "37514" 00:10:33.252 }, 00:10:33.252 "auth": { 00:10:33.252 "state": "completed", 00:10:33.252 "digest": "sha256", 00:10:33.252 "dhgroup": "ffdhe6144" 00:10:33.252 } 00:10:33.252 } 00:10:33.252 ]' 00:10:33.252 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:33.511 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.511 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.511 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:33.511 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.511 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.511 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.511 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.770 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:33.770 10:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:34.717 10:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:35.654 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.654 { 00:10:35.654 "cntlid": 41, 00:10:35.654 "qid": 0, 00:10:35.654 "state": "enabled", 00:10:35.654 "thread": "nvmf_tgt_poll_group_000", 00:10:35.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:35.654 "listen_address": { 00:10:35.654 "trtype": "TCP", 00:10:35.654 "adrfam": "IPv4", 00:10:35.654 "traddr": "10.0.0.3", 00:10:35.654 "trsvcid": "4420" 00:10:35.654 }, 00:10:35.654 "peer_address": { 00:10:35.654 "trtype": "TCP", 00:10:35.654 "adrfam": "IPv4", 00:10:35.654 "traddr": "10.0.0.1", 00:10:35.654 "trsvcid": "55584" 00:10:35.654 }, 00:10:35.654 "auth": { 00:10:35.654 "state": "completed", 00:10:35.654 "digest": "sha256", 00:10:35.654 "dhgroup": "ffdhe8192" 00:10:35.654 } 00:10:35.654 } 00:10:35.654 ]' 00:10:35.654 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.912 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.912 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.912 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:35.912 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.912 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.912 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.912 10:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.170 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:36.170 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:37.106 10:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.106 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:37.107 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.107 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.107 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:37.107 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.107 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:37.107 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.365 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.366 10:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:37.932 00:10:37.932 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.932 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.932 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.190 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.190 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.190 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.190 10:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.483 { 00:10:38.483 "cntlid": 43, 00:10:38.483 "qid": 0, 00:10:38.483 "state": "enabled", 00:10:38.483 "thread": "nvmf_tgt_poll_group_000", 00:10:38.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:38.483 "listen_address": { 00:10:38.483 "trtype": "TCP", 00:10:38.483 "adrfam": "IPv4", 00:10:38.483 "traddr": "10.0.0.3", 00:10:38.483 "trsvcid": "4420" 00:10:38.483 }, 00:10:38.483 "peer_address": { 00:10:38.483 "trtype": "TCP", 00:10:38.483 "adrfam": "IPv4", 00:10:38.483 "traddr": "10.0.0.1", 00:10:38.483 "trsvcid": "55614" 00:10:38.483 }, 00:10:38.483 "auth": { 00:10:38.483 "state": "completed", 00:10:38.483 "digest": "sha256", 00:10:38.483 "dhgroup": "ffdhe8192" 00:10:38.483 } 00:10:38.483 } 00:10:38.483 ]' 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.483 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.740 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:38.740 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:39.674 10:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.642 00:10:40.642 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.642 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.642 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.901 10:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.901 { 00:10:40.901 "cntlid": 45, 00:10:40.901 "qid": 0, 00:10:40.901 "state": "enabled", 00:10:40.901 "thread": "nvmf_tgt_poll_group_000", 00:10:40.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:40.901 "listen_address": { 00:10:40.901 "trtype": "TCP", 00:10:40.901 "adrfam": "IPv4", 00:10:40.901 "traddr": "10.0.0.3", 00:10:40.901 "trsvcid": "4420" 00:10:40.901 }, 00:10:40.901 "peer_address": { 00:10:40.901 "trtype": "TCP", 00:10:40.901 "adrfam": "IPv4", 00:10:40.901 "traddr": "10.0.0.1", 00:10:40.901 "trsvcid": "55644" 00:10:40.901 }, 00:10:40.901 "auth": { 00:10:40.901 "state": "completed", 00:10:40.901 "digest": "sha256", 00:10:40.901 "dhgroup": "ffdhe8192" 00:10:40.901 } 00:10:40.901 } 00:10:40.901 ]' 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.901 10:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.160 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:41.160 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:42.095 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.095 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:42.095 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
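The qpairs JSON dumped above is what the script inspects to confirm that authentication completed with the negotiated parameters. A minimal verification sketch over that output, using the same jq expressions as the trace (rpc_cmd is the target-side RPC wrapper from the test framework):

    # fetch the admin qpair (qid 0) for the subsystem and check its auth block
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256    ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe8192 ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]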
00:10:42.095 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.095 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.095 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.095 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:42.095 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:42.095 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:43.031 00:10:43.031 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.031 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.031 10:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.031 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.031 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.031 
10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.031 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.290 { 00:10:43.290 "cntlid": 47, 00:10:43.290 "qid": 0, 00:10:43.290 "state": "enabled", 00:10:43.290 "thread": "nvmf_tgt_poll_group_000", 00:10:43.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:43.290 "listen_address": { 00:10:43.290 "trtype": "TCP", 00:10:43.290 "adrfam": "IPv4", 00:10:43.290 "traddr": "10.0.0.3", 00:10:43.290 "trsvcid": "4420" 00:10:43.290 }, 00:10:43.290 "peer_address": { 00:10:43.290 "trtype": "TCP", 00:10:43.290 "adrfam": "IPv4", 00:10:43.290 "traddr": "10.0.0.1", 00:10:43.290 "trsvcid": "55662" 00:10:43.290 }, 00:10:43.290 "auth": { 00:10:43.290 "state": "completed", 00:10:43.290 "digest": "sha256", 00:10:43.290 "dhgroup": "ffdhe8192" 00:10:43.290 } 00:10:43.290 } 00:10:43.290 ]' 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.290 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.550 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:43.550 10:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
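Each key is also exercised through the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP host and controller secrets on the command line, then disconnects. Condensed from the trace; the DHHC-1 secrets are the test fixtures already printed above and are elided to "..." here:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 \
        --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 \
        --dhchap-secret 'DHHC-1:01:...' \
        --dhchap-ctrl-secret 'DHHC-1:02:...'

    # drop the kernel session again before the next key is tried
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0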
00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:44.521 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.781 10:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.039 00:10:45.039 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.039 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.040 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.299 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.299 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.299 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.299 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.299 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.299 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.299 { 00:10:45.299 "cntlid": 49, 00:10:45.299 "qid": 0, 00:10:45.299 "state": "enabled", 00:10:45.299 "thread": "nvmf_tgt_poll_group_000", 00:10:45.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:45.299 "listen_address": { 00:10:45.299 "trtype": "TCP", 00:10:45.299 "adrfam": "IPv4", 00:10:45.299 "traddr": "10.0.0.3", 00:10:45.299 "trsvcid": "4420" 00:10:45.299 }, 00:10:45.299 "peer_address": { 00:10:45.299 "trtype": "TCP", 00:10:45.299 "adrfam": "IPv4", 00:10:45.299 "traddr": "10.0.0.1", 00:10:45.299 "trsvcid": "56532" 00:10:45.299 }, 00:10:45.299 "auth": { 00:10:45.299 "state": "completed", 00:10:45.299 "digest": "sha384", 00:10:45.299 "dhgroup": "null" 00:10:45.299 } 00:10:45.299 } 00:10:45.299 ]' 00:10:45.299 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.557 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:45.557 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.557 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:45.557 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.557 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.557 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.557 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.816 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:45.816 10:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.759 10:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.759 10:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.327 00:10:47.327 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.327 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
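The @118, @119, @120 and @123 markers in the trace come from a triple loop in target/auth.sh that sweeps every digest, DH group and key id. Roughly as below; the array contents are only partially visible in this excerpt, so their initialisation is assumed for illustration:

    for digest in "${digests[@]}"; do          # e.g. sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do    # e.g. ffdhe8192, null, ffdhe2048, ...
            for keyid in "${!keys[@]}"; do     # 0 .. 3 in this run
                # limit the host to exactly this digest/dhgroup pair ...
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # ... then run one authenticated attach/connect cycle with key$keyid
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done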
00:10:47.327 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.585 { 00:10:47.585 "cntlid": 51, 00:10:47.585 "qid": 0, 00:10:47.585 "state": "enabled", 00:10:47.585 "thread": "nvmf_tgt_poll_group_000", 00:10:47.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:47.585 "listen_address": { 00:10:47.585 "trtype": "TCP", 00:10:47.585 "adrfam": "IPv4", 00:10:47.585 "traddr": "10.0.0.3", 00:10:47.585 "trsvcid": "4420" 00:10:47.585 }, 00:10:47.585 "peer_address": { 00:10:47.585 "trtype": "TCP", 00:10:47.585 "adrfam": "IPv4", 00:10:47.585 "traddr": "10.0.0.1", 00:10:47.585 "trsvcid": "56552" 00:10:47.585 }, 00:10:47.585 "auth": { 00:10:47.585 "state": "completed", 00:10:47.585 "digest": "sha384", 00:10:47.585 "dhgroup": "null" 00:10:47.585 } 00:10:47.585 } 00:10:47.585 ]' 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.585 10:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.153 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:48.153 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:48.722 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.722 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.722 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:48.722 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.722 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.722 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.722 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.722 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:48.722 10:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.981 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.240 00:10:49.240 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.240 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:10:49.240 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.499 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.499 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.499 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.499 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.758 { 00:10:49.758 "cntlid": 53, 00:10:49.758 "qid": 0, 00:10:49.758 "state": "enabled", 00:10:49.758 "thread": "nvmf_tgt_poll_group_000", 00:10:49.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:49.758 "listen_address": { 00:10:49.758 "trtype": "TCP", 00:10:49.758 "adrfam": "IPv4", 00:10:49.758 "traddr": "10.0.0.3", 00:10:49.758 "trsvcid": "4420" 00:10:49.758 }, 00:10:49.758 "peer_address": { 00:10:49.758 "trtype": "TCP", 00:10:49.758 "adrfam": "IPv4", 00:10:49.758 "traddr": "10.0.0.1", 00:10:49.758 "trsvcid": "56576" 00:10:49.758 }, 00:10:49.758 "auth": { 00:10:49.758 "state": "completed", 00:10:49.758 "digest": "sha384", 00:10:49.758 "dhgroup": "null" 00:10:49.758 } 00:10:49.758 } 00:10:49.758 ]' 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.758 10:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.016 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:50.016 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:50.584 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.843 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:50.843 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.843 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.843 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.843 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.843 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:50.843 10:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.101 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.360 00:10:51.360 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.360 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
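Note that the key3 iterations add the host with --dhchap-key key3 and no --dhchap-ctrlr-key: auth.sh builds the controller-key argument conditionally, so a key without a paired controller secret simply skips bidirectional authentication. The expansion as it appears in the trace at its line 68, with $hostnqn standing in here for the long uuid NQN:

    # $3 is the key id; ckeys[3] is empty in this run, so the array expands to
    # nothing and no --dhchap-ctrlr-key option is passed for key3 at all
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"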
00:10:51.360 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.647 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.647 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.647 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.647 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.647 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.647 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.647 { 00:10:51.647 "cntlid": 55, 00:10:51.647 "qid": 0, 00:10:51.647 "state": "enabled", 00:10:51.647 "thread": "nvmf_tgt_poll_group_000", 00:10:51.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:51.647 "listen_address": { 00:10:51.647 "trtype": "TCP", 00:10:51.647 "adrfam": "IPv4", 00:10:51.647 "traddr": "10.0.0.3", 00:10:51.647 "trsvcid": "4420" 00:10:51.647 }, 00:10:51.647 "peer_address": { 00:10:51.647 "trtype": "TCP", 00:10:51.647 "adrfam": "IPv4", 00:10:51.647 "traddr": "10.0.0.1", 00:10:51.648 "trsvcid": "56590" 00:10:51.648 }, 00:10:51.648 "auth": { 00:10:51.648 "state": "completed", 00:10:51.648 "digest": "sha384", 00:10:51.648 "dhgroup": "null" 00:10:51.648 } 00:10:51.648 } 00:10:51.648 ]' 00:10:51.648 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.648 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:51.648 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.648 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:51.648 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.648 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.648 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.648 10:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.234 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:52.234 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
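Every iteration ends with a symmetric teardown so the next digest/dhgroup/key combination starts from a clean state. Condensed from the @78, @82 and @83 steps in the trace, with $hostnqn again standing in for the uuid NQN:

    # detach the bdev_nvme controller created for this iteration
    hostrpc bdev_nvme_detach_controller nvme0

    # drop the kernel-initiator session and revoke the host entry on the target
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"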
00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:52.801 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.060 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.628 00:10:53.628 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.628 
10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.628 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.887 { 00:10:53.887 "cntlid": 57, 00:10:53.887 "qid": 0, 00:10:53.887 "state": "enabled", 00:10:53.887 "thread": "nvmf_tgt_poll_group_000", 00:10:53.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:53.887 "listen_address": { 00:10:53.887 "trtype": "TCP", 00:10:53.887 "adrfam": "IPv4", 00:10:53.887 "traddr": "10.0.0.3", 00:10:53.887 "trsvcid": "4420" 00:10:53.887 }, 00:10:53.887 "peer_address": { 00:10:53.887 "trtype": "TCP", 00:10:53.887 "adrfam": "IPv4", 00:10:53.887 "traddr": "10.0.0.1", 00:10:53.887 "trsvcid": "45716" 00:10:53.887 }, 00:10:53.887 "auth": { 00:10:53.887 "state": "completed", 00:10:53.887 "digest": "sha384", 00:10:53.887 "dhgroup": "ffdhe2048" 00:10:53.887 } 00:10:53.887 } 00:10:53.887 ]' 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.887 10:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:53.887 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.887 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.887 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.887 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.454 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:54.454 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: 
--dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:10:55.022 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.022 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:55.022 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.022 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.022 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.022 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.022 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:55.022 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.282 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.542 00:10:55.542 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.542 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.542 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.117 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.117 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.117 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.117 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.117 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.117 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.117 { 00:10:56.117 "cntlid": 59, 00:10:56.117 "qid": 0, 00:10:56.117 "state": "enabled", 00:10:56.117 "thread": "nvmf_tgt_poll_group_000", 00:10:56.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:56.117 "listen_address": { 00:10:56.117 "trtype": "TCP", 00:10:56.117 "adrfam": "IPv4", 00:10:56.117 "traddr": "10.0.0.3", 00:10:56.117 "trsvcid": "4420" 00:10:56.117 }, 00:10:56.117 "peer_address": { 00:10:56.117 "trtype": "TCP", 00:10:56.117 "adrfam": "IPv4", 00:10:56.117 "traddr": "10.0.0.1", 00:10:56.117 "trsvcid": "45740" 00:10:56.117 }, 00:10:56.117 "auth": { 00:10:56.117 "state": "completed", 00:10:56.117 "digest": "sha384", 00:10:56.117 "dhgroup": "ffdhe2048" 00:10:56.117 } 00:10:56.117 } 00:10:56.117 ]' 00:10:56.117 10:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.117 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:56.117 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.117 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:56.117 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.117 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.117 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.117 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.404 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:56.404 10:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:10:57.339 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.340 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.913 00:10:57.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.913 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.172 { 00:10:58.172 "cntlid": 61, 00:10:58.172 "qid": 0, 00:10:58.172 "state": "enabled", 00:10:58.172 "thread": "nvmf_tgt_poll_group_000", 00:10:58.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:10:58.172 "listen_address": { 00:10:58.172 "trtype": "TCP", 00:10:58.172 "adrfam": "IPv4", 00:10:58.172 "traddr": "10.0.0.3", 00:10:58.172 "trsvcid": "4420" 00:10:58.172 }, 00:10:58.172 "peer_address": { 00:10:58.172 "trtype": "TCP", 00:10:58.172 "adrfam": "IPv4", 00:10:58.172 "traddr": "10.0.0.1", 00:10:58.172 "trsvcid": "45752" 00:10:58.172 }, 00:10:58.172 "auth": { 00:10:58.172 "state": "completed", 00:10:58.172 "digest": "sha384", 00:10:58.172 "dhgroup": "ffdhe2048" 00:10:58.172 } 00:10:58.172 } 00:10:58.172 ]' 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:58.172 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.431 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.431 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.431 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.690 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:58.690 10:00:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:10:59.257 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.257 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:10:59.257 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.257 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.257 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.257 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.257 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:59.257 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:59.824 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.825 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:00.083 00:11:00.083 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.083 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.083 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.342 { 00:11:00.342 "cntlid": 63, 00:11:00.342 "qid": 0, 00:11:00.342 "state": "enabled", 00:11:00.342 "thread": "nvmf_tgt_poll_group_000", 00:11:00.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:00.342 "listen_address": { 00:11:00.342 "trtype": "TCP", 00:11:00.342 "adrfam": "IPv4", 00:11:00.342 "traddr": "10.0.0.3", 00:11:00.342 "trsvcid": "4420" 00:11:00.342 }, 00:11:00.342 "peer_address": { 00:11:00.342 "trtype": "TCP", 00:11:00.342 "adrfam": "IPv4", 00:11:00.342 "traddr": "10.0.0.1", 00:11:00.342 "trsvcid": "45782" 00:11:00.342 }, 00:11:00.342 "auth": { 00:11:00.342 "state": "completed", 00:11:00.342 "digest": "sha384", 00:11:00.342 "dhgroup": "ffdhe2048" 00:11:00.342 } 00:11:00.342 } 00:11:00.342 ]' 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:00.342 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.601 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.601 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.601 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.860 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:00.861 10:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:01.428 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.429 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:01.429 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.429 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.429 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.429 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.429 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.429 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:01.429 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:01.688 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.947 00:11:02.235 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.236 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.236 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.494 { 00:11:02.494 "cntlid": 65, 00:11:02.494 "qid": 0, 00:11:02.494 "state": "enabled", 00:11:02.494 "thread": "nvmf_tgt_poll_group_000", 00:11:02.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:02.494 "listen_address": { 00:11:02.494 "trtype": "TCP", 00:11:02.494 "adrfam": "IPv4", 00:11:02.494 "traddr": "10.0.0.3", 00:11:02.494 "trsvcid": "4420" 00:11:02.494 }, 00:11:02.494 "peer_address": { 00:11:02.494 "trtype": "TCP", 00:11:02.494 "adrfam": "IPv4", 00:11:02.494 "traddr": "10.0.0.1", 00:11:02.494 "trsvcid": "45800" 00:11:02.494 }, 00:11:02.494 "auth": { 00:11:02.494 "state": "completed", 00:11:02.494 "digest": "sha384", 00:11:02.494 "dhgroup": "ffdhe3072" 00:11:02.494 } 00:11:02.494 } 00:11:02.494 ]' 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.494 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.753 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:02.753 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:03.321 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.321 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:03.321 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.321 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.321 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.321 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.321 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:03.321 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.580 10:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.580 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.147 00:11:04.148 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.148 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.148 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.407 { 00:11:04.407 "cntlid": 67, 00:11:04.407 "qid": 0, 00:11:04.407 "state": "enabled", 00:11:04.407 "thread": "nvmf_tgt_poll_group_000", 00:11:04.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:04.407 "listen_address": { 00:11:04.407 "trtype": "TCP", 00:11:04.407 "adrfam": "IPv4", 00:11:04.407 "traddr": "10.0.0.3", 00:11:04.407 "trsvcid": "4420" 00:11:04.407 }, 00:11:04.407 "peer_address": { 00:11:04.407 "trtype": "TCP", 00:11:04.407 "adrfam": "IPv4", 00:11:04.407 "traddr": "10.0.0.1", 00:11:04.407 "trsvcid": "44372" 00:11:04.407 }, 00:11:04.407 "auth": { 00:11:04.407 "state": "completed", 00:11:04.407 "digest": "sha384", 00:11:04.407 "dhgroup": "ffdhe3072" 00:11:04.407 } 00:11:04.407 } 00:11:04.407 ]' 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.407 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.665 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:04.665 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:05.602 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.602 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:05.602 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.602 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.602 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.602 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.602 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:05.602 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.861 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.862 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:06.120 00:11:06.120 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.120 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:06.120 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:06.380 { 00:11:06.380 "cntlid": 69, 00:11:06.380 "qid": 0, 00:11:06.380 "state": "enabled", 00:11:06.380 "thread": "nvmf_tgt_poll_group_000", 00:11:06.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:06.380 "listen_address": { 00:11:06.380 "trtype": "TCP", 00:11:06.380 "adrfam": "IPv4", 00:11:06.380 "traddr": "10.0.0.3", 00:11:06.380 "trsvcid": "4420" 00:11:06.380 }, 00:11:06.380 "peer_address": { 00:11:06.380 "trtype": "TCP", 00:11:06.380 "adrfam": "IPv4", 00:11:06.380 "traddr": "10.0.0.1", 00:11:06.380 "trsvcid": "44404" 00:11:06.380 }, 00:11:06.380 "auth": { 00:11:06.380 "state": "completed", 00:11:06.380 "digest": "sha384", 00:11:06.380 "dhgroup": "ffdhe3072" 00:11:06.380 } 00:11:06.380 } 00:11:06.380 ]' 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:06.380 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.638 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.638 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:06.638 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.897 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:06.897 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:07.465 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.465 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:07.465 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.465 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.465 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.465 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.465 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:07.465 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.724 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:07.982 00:11:07.982 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.982 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.982 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.241 { 00:11:08.241 "cntlid": 71, 00:11:08.241 "qid": 0, 00:11:08.241 "state": "enabled", 00:11:08.241 "thread": "nvmf_tgt_poll_group_000", 00:11:08.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:08.241 "listen_address": { 00:11:08.241 "trtype": "TCP", 00:11:08.241 "adrfam": "IPv4", 00:11:08.241 "traddr": "10.0.0.3", 00:11:08.241 "trsvcid": "4420" 00:11:08.241 }, 00:11:08.241 "peer_address": { 00:11:08.241 "trtype": "TCP", 00:11:08.241 "adrfam": "IPv4", 00:11:08.241 "traddr": "10.0.0.1", 00:11:08.241 "trsvcid": "44438" 00:11:08.241 }, 00:11:08.241 "auth": { 00:11:08.241 "state": "completed", 00:11:08.241 "digest": "sha384", 00:11:08.241 "dhgroup": "ffdhe3072" 00:11:08.241 } 00:11:08.241 } 00:11:08.241 ]' 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:08.241 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.500 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.500 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.500 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.759 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:08.759 10:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:09.326 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.326 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:09.326 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.327 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.327 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.327 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:09.327 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.327 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:09.327 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.586 10:00:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.586 10:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.845 00:11:10.104 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.104 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.104 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.363 { 00:11:10.363 "cntlid": 73, 00:11:10.363 "qid": 0, 00:11:10.363 "state": "enabled", 00:11:10.363 "thread": "nvmf_tgt_poll_group_000", 00:11:10.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:10.363 "listen_address": { 00:11:10.363 "trtype": "TCP", 00:11:10.363 "adrfam": "IPv4", 00:11:10.363 "traddr": "10.0.0.3", 00:11:10.363 "trsvcid": "4420" 00:11:10.363 }, 00:11:10.363 "peer_address": { 00:11:10.363 "trtype": "TCP", 00:11:10.363 "adrfam": "IPv4", 00:11:10.363 "traddr": "10.0.0.1", 00:11:10.363 "trsvcid": "44450" 00:11:10.363 }, 00:11:10.363 "auth": { 00:11:10.363 "state": "completed", 00:11:10.363 "digest": "sha384", 00:11:10.363 "dhgroup": "ffdhe4096" 00:11:10.363 } 00:11:10.363 } 00:11:10.363 ]' 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.363 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.622 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:10.622 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:11.556 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.556 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:11.556 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.556 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.556 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.556 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.556 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:11.556 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.815 10:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.815 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.073 00:11:12.073 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.073 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.073 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.331 { 00:11:12.331 "cntlid": 75, 00:11:12.331 "qid": 0, 00:11:12.331 "state": "enabled", 00:11:12.331 "thread": "nvmf_tgt_poll_group_000", 00:11:12.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:12.331 "listen_address": { 00:11:12.331 "trtype": "TCP", 00:11:12.331 "adrfam": "IPv4", 00:11:12.331 "traddr": "10.0.0.3", 00:11:12.331 "trsvcid": "4420" 00:11:12.331 }, 00:11:12.331 "peer_address": { 00:11:12.331 "trtype": "TCP", 00:11:12.331 "adrfam": "IPv4", 00:11:12.331 "traddr": "10.0.0.1", 00:11:12.331 "trsvcid": "44472" 00:11:12.331 }, 00:11:12.331 "auth": { 00:11:12.331 "state": "completed", 00:11:12.331 "digest": "sha384", 00:11:12.331 "dhgroup": "ffdhe4096" 00:11:12.331 } 00:11:12.331 } 00:11:12.331 ]' 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.331 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.590 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:12.590 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.590 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.590 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.590 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.858 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:12.858 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:13.424 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.424 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:13.424 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.424 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.424 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.424 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.424 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:13.424 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:13.683 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:13.683 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.684 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:13.684 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:13.684 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:13.684 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.684 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.684 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.684 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.942 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.942 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.942 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.942 10:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.201 00:11:14.201 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.201 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.201 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.460 { 00:11:14.460 "cntlid": 77, 00:11:14.460 "qid": 0, 00:11:14.460 "state": "enabled", 00:11:14.460 "thread": "nvmf_tgt_poll_group_000", 00:11:14.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:14.460 "listen_address": { 00:11:14.460 "trtype": "TCP", 00:11:14.460 "adrfam": "IPv4", 00:11:14.460 "traddr": "10.0.0.3", 00:11:14.460 "trsvcid": "4420" 00:11:14.460 }, 00:11:14.460 "peer_address": { 00:11:14.460 "trtype": "TCP", 00:11:14.460 "adrfam": "IPv4", 00:11:14.460 "traddr": "10.0.0.1", 00:11:14.460 "trsvcid": "38434" 00:11:14.460 }, 00:11:14.460 "auth": { 00:11:14.460 "state": "completed", 00:11:14.460 "digest": "sha384", 00:11:14.460 "dhgroup": "ffdhe4096" 00:11:14.460 } 00:11:14.460 } 00:11:14.460 ]' 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.460 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:14.718 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:14.718 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.718 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.718 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.718 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.976 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:14.976 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:15.544 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.544 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:15.544 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.544 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.544 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.544 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.544 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.544 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.803 10:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.803 10:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.371 00:11:16.371 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.371 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.371 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.630 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.630 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.630 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.630 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.630 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.630 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.630 { 00:11:16.630 "cntlid": 79, 00:11:16.630 "qid": 0, 00:11:16.630 "state": "enabled", 00:11:16.630 "thread": "nvmf_tgt_poll_group_000", 00:11:16.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:16.630 "listen_address": { 00:11:16.630 "trtype": "TCP", 00:11:16.630 "adrfam": "IPv4", 00:11:16.630 "traddr": "10.0.0.3", 00:11:16.630 "trsvcid": "4420" 00:11:16.630 }, 00:11:16.630 "peer_address": { 00:11:16.630 "trtype": "TCP", 00:11:16.630 "adrfam": "IPv4", 00:11:16.630 "traddr": "10.0.0.1", 00:11:16.630 "trsvcid": "38478" 00:11:16.630 }, 00:11:16.630 "auth": { 00:11:16.630 "state": "completed", 00:11:16.630 "digest": "sha384", 00:11:16.630 "dhgroup": "ffdhe4096" 00:11:16.630 } 00:11:16.630 } 00:11:16.630 ]' 00:11:16.630 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.630 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.630 10:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.888 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:16.888 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.888 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.888 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.888 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.146 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:17.147 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:17.712 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.013 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.580 00:11:18.580 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.580 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.580 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.838 { 00:11:18.838 "cntlid": 81, 00:11:18.838 "qid": 0, 00:11:18.838 "state": "enabled", 00:11:18.838 "thread": "nvmf_tgt_poll_group_000", 00:11:18.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:18.838 "listen_address": { 00:11:18.838 "trtype": "TCP", 00:11:18.838 "adrfam": "IPv4", 00:11:18.838 "traddr": "10.0.0.3", 00:11:18.838 "trsvcid": "4420" 00:11:18.838 }, 00:11:18.838 "peer_address": { 00:11:18.838 "trtype": "TCP", 00:11:18.838 "adrfam": "IPv4", 00:11:18.838 "traddr": "10.0.0.1", 00:11:18.838 "trsvcid": "38516" 00:11:18.838 }, 00:11:18.838 "auth": { 00:11:18.838 "state": "completed", 00:11:18.838 "digest": "sha384", 00:11:18.838 "dhgroup": "ffdhe6144" 00:11:18.838 } 00:11:18.838 } 00:11:18.838 ]' 00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
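The trace above repeats one verification pattern per key/dhgroup combination. A condensed sketch of that per-iteration RPC flow is below; the host NQN, subsystem NQN, addresses, and key names are taken from the trace, while the target-side RPC socket is never printed in the log and is assumed here to be SPDK's default:

#!/usr/bin/env bash
# Condensed sketch of one iteration of the DH-HMAC-CHAP checks traced above.
# All values are copied from the trace except the target RPC socket, which is
# assumed to be SPDK's default; key0/ckey0 are key names registered earlier in
# the run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735

# Host side: restrict bdev_nvme to one digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side: allow the host on the subsystem and bind its DH-HMAC-CHAP keys.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the same keys.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Target side: confirm the qpair negotiated the expected state/digest/dhgroup.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# Tear down before the next combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

In the trace itself the same calls are issued through the hostrpc and rpc_cmd wrappers defined in target/auth.sh and common/autotest_common.sh.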
00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.838 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.097 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:19.097 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.097 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.097 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.097 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.355 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:19.355 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:20.290 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.290 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:20.290 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.290 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.290 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.291 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.291 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:20.291 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.549 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.115 00:11:21.115 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.115 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.115 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.373 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.373 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.373 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.374 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.374 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.374 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.374 { 00:11:21.374 "cntlid": 83, 00:11:21.374 "qid": 0, 00:11:21.374 "state": "enabled", 00:11:21.374 "thread": "nvmf_tgt_poll_group_000", 00:11:21.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:21.374 "listen_address": { 00:11:21.374 "trtype": "TCP", 00:11:21.374 "adrfam": "IPv4", 00:11:21.374 "traddr": "10.0.0.3", 00:11:21.374 "trsvcid": "4420" 00:11:21.374 }, 00:11:21.374 "peer_address": { 00:11:21.374 "trtype": "TCP", 00:11:21.374 "adrfam": "IPv4", 00:11:21.374 "traddr": "10.0.0.1", 00:11:21.374 "trsvcid": "38538" 00:11:21.374 }, 00:11:21.374 "auth": { 00:11:21.374 "state": "completed", 00:11:21.374 "digest": "sha384", 
00:11:21.374 "dhgroup": "ffdhe6144" 00:11:21.374 } 00:11:21.374 } 00:11:21.374 ]' 00:11:21.374 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.374 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.374 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.374 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:21.374 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.632 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.632 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.632 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.890 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:21.890 10:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:22.457 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.457 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:22.457 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.457 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.457 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.457 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.457 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:22.457 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.716 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.317 00:11:23.317 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.317 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.317 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.576 { 00:11:23.576 "cntlid": 85, 00:11:23.576 "qid": 0, 00:11:23.576 "state": "enabled", 00:11:23.576 "thread": "nvmf_tgt_poll_group_000", 00:11:23.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:23.576 "listen_address": { 00:11:23.576 "trtype": "TCP", 00:11:23.576 "adrfam": "IPv4", 00:11:23.576 "traddr": "10.0.0.3", 00:11:23.576 "trsvcid": "4420" 00:11:23.576 }, 00:11:23.576 "peer_address": { 00:11:23.576 "trtype": "TCP", 00:11:23.576 "adrfam": "IPv4", 00:11:23.576 "traddr": "10.0.0.1", 00:11:23.576 "trsvcid": "47386" 
00:11:23.576 }, 00:11:23.576 "auth": { 00:11:23.576 "state": "completed", 00:11:23.576 "digest": "sha384", 00:11:23.576 "dhgroup": "ffdhe6144" 00:11:23.576 } 00:11:23.576 } 00:11:23.576 ]' 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:23.576 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.834 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.834 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.834 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.092 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:24.092 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:24.660 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.660 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:24.660 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.660 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.660 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.660 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.660 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:24.660 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.226 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.484 00:11:25.484 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.484 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.484 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.743 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.743 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.743 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.743 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.743 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.743 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.743 { 00:11:25.743 "cntlid": 87, 00:11:25.743 "qid": 0, 00:11:25.743 "state": "enabled", 00:11:25.743 "thread": "nvmf_tgt_poll_group_000", 00:11:25.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:25.743 "listen_address": { 00:11:25.743 "trtype": "TCP", 00:11:25.743 "adrfam": "IPv4", 00:11:25.743 "traddr": "10.0.0.3", 00:11:25.743 "trsvcid": "4420" 00:11:25.743 }, 00:11:25.743 "peer_address": { 00:11:25.743 "trtype": "TCP", 00:11:25.743 "adrfam": "IPv4", 00:11:25.743 "traddr": "10.0.0.1", 00:11:25.743 "trsvcid": 
"47418" 00:11:25.743 }, 00:11:25.743 "auth": { 00:11:25.743 "state": "completed", 00:11:25.743 "digest": "sha384", 00:11:25.743 "dhgroup": "ffdhe6144" 00:11:25.743 } 00:11:25.743 } 00:11:25.743 ]' 00:11:25.743 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.001 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.001 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.001 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:26.001 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.001 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.001 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.001 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.259 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:26.259 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:26.825 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.825 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:26.825 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.825 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.825 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.825 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.825 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.826 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:26.826 10:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.392 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.961 00:11:27.961 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.961 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.961 10:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.220 { 00:11:28.220 "cntlid": 89, 00:11:28.220 "qid": 0, 00:11:28.220 "state": "enabled", 00:11:28.220 "thread": "nvmf_tgt_poll_group_000", 00:11:28.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:28.220 "listen_address": { 00:11:28.220 "trtype": "TCP", 00:11:28.220 "adrfam": "IPv4", 00:11:28.220 "traddr": "10.0.0.3", 00:11:28.220 "trsvcid": "4420" 00:11:28.220 }, 00:11:28.220 "peer_address": { 00:11:28.220 
"trtype": "TCP", 00:11:28.220 "adrfam": "IPv4", 00:11:28.220 "traddr": "10.0.0.1", 00:11:28.220 "trsvcid": "47446" 00:11:28.220 }, 00:11:28.220 "auth": { 00:11:28.220 "state": "completed", 00:11:28.220 "digest": "sha384", 00:11:28.220 "dhgroup": "ffdhe8192" 00:11:28.220 } 00:11:28.220 } 00:11:28.220 ]' 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:28.220 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.478 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.478 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.478 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.736 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:28.736 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:29.670 10:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.670 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.612 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.612 { 00:11:30.612 "cntlid": 91, 00:11:30.612 "qid": 0, 00:11:30.612 "state": "enabled", 00:11:30.612 "thread": "nvmf_tgt_poll_group_000", 00:11:30.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 
00:11:30.612 "listen_address": { 00:11:30.612 "trtype": "TCP", 00:11:30.612 "adrfam": "IPv4", 00:11:30.612 "traddr": "10.0.0.3", 00:11:30.612 "trsvcid": "4420" 00:11:30.612 }, 00:11:30.612 "peer_address": { 00:11:30.612 "trtype": "TCP", 00:11:30.612 "adrfam": "IPv4", 00:11:30.612 "traddr": "10.0.0.1", 00:11:30.612 "trsvcid": "47478" 00:11:30.612 }, 00:11:30.612 "auth": { 00:11:30.612 "state": "completed", 00:11:30.612 "digest": "sha384", 00:11:30.612 "dhgroup": "ffdhe8192" 00:11:30.612 } 00:11:30.612 } 00:11:30.612 ]' 00:11:30.612 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.871 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.871 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.871 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:30.871 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.871 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.871 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.871 10:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.129 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:31.130 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:31.696 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.696 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:31.696 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.696 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.697 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.697 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.697 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:31.697 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.264 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.831 00:11:32.831 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.831 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.831 10:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.102 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.102 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.102 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.102 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.102 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.102 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.102 { 00:11:33.102 "cntlid": 93, 00:11:33.102 "qid": 0, 00:11:33.102 "state": "enabled", 00:11:33.102 "thread": 
"nvmf_tgt_poll_group_000", 00:11:33.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:33.102 "listen_address": { 00:11:33.102 "trtype": "TCP", 00:11:33.103 "adrfam": "IPv4", 00:11:33.103 "traddr": "10.0.0.3", 00:11:33.103 "trsvcid": "4420" 00:11:33.103 }, 00:11:33.103 "peer_address": { 00:11:33.103 "trtype": "TCP", 00:11:33.103 "adrfam": "IPv4", 00:11:33.103 "traddr": "10.0.0.1", 00:11:33.103 "trsvcid": "47498" 00:11:33.103 }, 00:11:33.103 "auth": { 00:11:33.103 "state": "completed", 00:11:33.103 "digest": "sha384", 00:11:33.103 "dhgroup": "ffdhe8192" 00:11:33.103 } 00:11:33.103 } 00:11:33.103 ]' 00:11:33.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:33.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.103 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.374 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:33.374 10:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:34.308 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.308 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:34.308 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.308 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.308 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.308 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.308 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:34.308 10:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:34.566 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:34.566 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.566 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:34.566 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:34.567 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.133 00:11:35.133 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.133 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.133 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.392 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.392 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.392 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.392 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.392 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.392 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.392 { 00:11:35.392 "cntlid": 95, 00:11:35.392 "qid": 0, 00:11:35.392 "state": "enabled", 00:11:35.392 
"thread": "nvmf_tgt_poll_group_000", 00:11:35.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:35.392 "listen_address": { 00:11:35.392 "trtype": "TCP", 00:11:35.392 "adrfam": "IPv4", 00:11:35.392 "traddr": "10.0.0.3", 00:11:35.392 "trsvcid": "4420" 00:11:35.392 }, 00:11:35.392 "peer_address": { 00:11:35.392 "trtype": "TCP", 00:11:35.392 "adrfam": "IPv4", 00:11:35.392 "traddr": "10.0.0.1", 00:11:35.392 "trsvcid": "36942" 00:11:35.392 }, 00:11:35.392 "auth": { 00:11:35.392 "state": "completed", 00:11:35.392 "digest": "sha384", 00:11:35.392 "dhgroup": "ffdhe8192" 00:11:35.392 } 00:11:35.392 } 00:11:35.392 ]' 00:11:35.392 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.650 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.650 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.651 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:35.651 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.651 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.651 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.651 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.908 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:35.908 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:36.473 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.731 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:36.731 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.731 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.731 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.731 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:36.731 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:36.731 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.731 10:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:36.731 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.989 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.247 00:11:37.247 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.247 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.248 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.506 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.506 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.506 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.506 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.765 { 00:11:37.765 "cntlid": 97, 00:11:37.765 "qid": 0, 00:11:37.765 "state": "enabled", 00:11:37.765 "thread": "nvmf_tgt_poll_group_000", 00:11:37.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:37.765 "listen_address": { 00:11:37.765 "trtype": "TCP", 00:11:37.765 "adrfam": "IPv4", 00:11:37.765 "traddr": "10.0.0.3", 00:11:37.765 "trsvcid": "4420" 00:11:37.765 }, 00:11:37.765 "peer_address": { 00:11:37.765 "trtype": "TCP", 00:11:37.765 "adrfam": "IPv4", 00:11:37.765 "traddr": "10.0.0.1", 00:11:37.765 "trsvcid": "36956" 00:11:37.765 }, 00:11:37.765 "auth": { 00:11:37.765 "state": "completed", 00:11:37.765 "digest": "sha512", 00:11:37.765 "dhgroup": "null" 00:11:37.765 } 00:11:37.765 } 00:11:37.765 ]' 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.765 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.024 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:38.024 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:39.007 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.007 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:39.007 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.007 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.007 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:39.007 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.007 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.007 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.266 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.525 00:11:39.525 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.525 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.525 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.785 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.785 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.785 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.786 10:01:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.786 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.786 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.786 { 00:11:39.786 "cntlid": 99, 00:11:39.786 "qid": 0, 00:11:39.786 "state": "enabled", 00:11:39.786 "thread": "nvmf_tgt_poll_group_000", 00:11:39.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:39.786 "listen_address": { 00:11:39.786 "trtype": "TCP", 00:11:39.786 "adrfam": "IPv4", 00:11:39.786 "traddr": "10.0.0.3", 00:11:39.786 "trsvcid": "4420" 00:11:39.786 }, 00:11:39.786 "peer_address": { 00:11:39.786 "trtype": "TCP", 00:11:39.786 "adrfam": "IPv4", 00:11:39.786 "traddr": "10.0.0.1", 00:11:39.786 "trsvcid": "36988" 00:11:39.786 }, 00:11:39.786 "auth": { 00:11:39.786 "state": "completed", 00:11:39.786 "digest": "sha512", 00:11:39.786 "dhgroup": "null" 00:11:39.786 } 00:11:39.786 } 00:11:39.786 ]' 00:11:39.786 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.786 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.786 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.786 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:39.786 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.044 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.044 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.044 10:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.303 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:40.303 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:40.869 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.869 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:40.869 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.869 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.869 10:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.869 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.869 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:40.869 10:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.128 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.696 00:11:41.696 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.696 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.696 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.954 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.954 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.954 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.954 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.954 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.954 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.954 { 00:11:41.954 "cntlid": 101, 00:11:41.954 "qid": 0, 00:11:41.954 "state": "enabled", 00:11:41.954 "thread": "nvmf_tgt_poll_group_000", 00:11:41.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:41.954 "listen_address": { 00:11:41.954 "trtype": "TCP", 00:11:41.954 "adrfam": "IPv4", 00:11:41.955 "traddr": "10.0.0.3", 00:11:41.955 "trsvcid": "4420" 00:11:41.955 }, 00:11:41.955 "peer_address": { 00:11:41.955 "trtype": "TCP", 00:11:41.955 "adrfam": "IPv4", 00:11:41.955 "traddr": "10.0.0.1", 00:11:41.955 "trsvcid": "37016" 00:11:41.955 }, 00:11:41.955 "auth": { 00:11:41.955 "state": "completed", 00:11:41.955 "digest": "sha512", 00:11:41.955 "dhgroup": "null" 00:11:41.955 } 00:11:41.955 } 00:11:41.955 ]' 00:11:41.955 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.955 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.955 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.955 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:41.955 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.955 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.955 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.955 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.521 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:42.521 10:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:43.088 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.088 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:43.088 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.088 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:43.088 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.088 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.088 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:43.088 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:43.346 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:43.346 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.347 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.605 00:11:43.605 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.605 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.605 10:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.864 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.864 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.864 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:43.864 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.129 { 00:11:44.129 "cntlid": 103, 00:11:44.129 "qid": 0, 00:11:44.129 "state": "enabled", 00:11:44.129 "thread": "nvmf_tgt_poll_group_000", 00:11:44.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:44.129 "listen_address": { 00:11:44.129 "trtype": "TCP", 00:11:44.129 "adrfam": "IPv4", 00:11:44.129 "traddr": "10.0.0.3", 00:11:44.129 "trsvcid": "4420" 00:11:44.129 }, 00:11:44.129 "peer_address": { 00:11:44.129 "trtype": "TCP", 00:11:44.129 "adrfam": "IPv4", 00:11:44.129 "traddr": "10.0.0.1", 00:11:44.129 "trsvcid": "37044" 00:11:44.129 }, 00:11:44.129 "auth": { 00:11:44.129 "state": "completed", 00:11:44.129 "digest": "sha512", 00:11:44.129 "dhgroup": "null" 00:11:44.129 } 00:11:44.129 } 00:11:44.129 ]' 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.129 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.399 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:44.399 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:44.965 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.531 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.790 00:11:45.790 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.790 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.790 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.049 
10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.049 { 00:11:46.049 "cntlid": 105, 00:11:46.049 "qid": 0, 00:11:46.049 "state": "enabled", 00:11:46.049 "thread": "nvmf_tgt_poll_group_000", 00:11:46.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:46.049 "listen_address": { 00:11:46.049 "trtype": "TCP", 00:11:46.049 "adrfam": "IPv4", 00:11:46.049 "traddr": "10.0.0.3", 00:11:46.049 "trsvcid": "4420" 00:11:46.049 }, 00:11:46.049 "peer_address": { 00:11:46.049 "trtype": "TCP", 00:11:46.049 "adrfam": "IPv4", 00:11:46.049 "traddr": "10.0.0.1", 00:11:46.049 "trsvcid": "37088" 00:11:46.049 }, 00:11:46.049 "auth": { 00:11:46.049 "state": "completed", 00:11:46.049 "digest": "sha512", 00:11:46.049 "dhgroup": "ffdhe2048" 00:11:46.049 } 00:11:46.049 } 00:11:46.049 ]' 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:46.049 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.308 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.308 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.308 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.566 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:46.566 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:47.133 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.133 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:47.133 10:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.133 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.133 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.133 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.134 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:47.134 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.393 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.652 00:11:47.911 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.911 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.911 10:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.170 { 00:11:48.170 "cntlid": 107, 00:11:48.170 "qid": 0, 00:11:48.170 "state": "enabled", 00:11:48.170 "thread": "nvmf_tgt_poll_group_000", 00:11:48.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:48.170 "listen_address": { 00:11:48.170 "trtype": "TCP", 00:11:48.170 "adrfam": "IPv4", 00:11:48.170 "traddr": "10.0.0.3", 00:11:48.170 "trsvcid": "4420" 00:11:48.170 }, 00:11:48.170 "peer_address": { 00:11:48.170 "trtype": "TCP", 00:11:48.170 "adrfam": "IPv4", 00:11:48.170 "traddr": "10.0.0.1", 00:11:48.170 "trsvcid": "37120" 00:11:48.170 }, 00:11:48.170 "auth": { 00:11:48.170 "state": "completed", 00:11:48.170 "digest": "sha512", 00:11:48.170 "dhgroup": "ffdhe2048" 00:11:48.170 } 00:11:48.170 } 00:11:48.170 ]' 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.170 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.429 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:48.429 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:49.364 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.364 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:49.364 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.364 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.364 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.364 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.364 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:49.364 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.622 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.881 00:11:49.881 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.881 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.881 10:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.142 { 00:11:50.142 "cntlid": 109, 00:11:50.142 "qid": 0, 00:11:50.142 "state": "enabled", 00:11:50.142 "thread": "nvmf_tgt_poll_group_000", 00:11:50.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:50.142 "listen_address": { 00:11:50.142 "trtype": "TCP", 00:11:50.142 "adrfam": "IPv4", 00:11:50.142 "traddr": "10.0.0.3", 00:11:50.142 "trsvcid": "4420" 00:11:50.142 }, 00:11:50.142 "peer_address": { 00:11:50.142 "trtype": "TCP", 00:11:50.142 "adrfam": "IPv4", 00:11:50.142 "traddr": "10.0.0.1", 00:11:50.142 "trsvcid": "37146" 00:11:50.142 }, 00:11:50.142 "auth": { 00:11:50.142 "state": "completed", 00:11:50.142 "digest": "sha512", 00:11:50.142 "dhgroup": "ffdhe2048" 00:11:50.142 } 00:11:50.142 } 00:11:50.142 ]' 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.142 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.401 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.401 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.401 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.401 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.401 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.659 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:50.659 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:51.226 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:11:51.226 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:51.226 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.226 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.226 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.226 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.226 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:51.226 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.485 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.743 00:11:51.743 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.743 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.743 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.311 { 00:11:52.311 "cntlid": 111, 00:11:52.311 "qid": 0, 00:11:52.311 "state": "enabled", 00:11:52.311 "thread": "nvmf_tgt_poll_group_000", 00:11:52.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:52.311 "listen_address": { 00:11:52.311 "trtype": "TCP", 00:11:52.311 "adrfam": "IPv4", 00:11:52.311 "traddr": "10.0.0.3", 00:11:52.311 "trsvcid": "4420" 00:11:52.311 }, 00:11:52.311 "peer_address": { 00:11:52.311 "trtype": "TCP", 00:11:52.311 "adrfam": "IPv4", 00:11:52.311 "traddr": "10.0.0.1", 00:11:52.311 "trsvcid": "37178" 00:11:52.311 }, 00:11:52.311 "auth": { 00:11:52.311 "state": "completed", 00:11:52.311 "digest": "sha512", 00:11:52.311 "dhgroup": "ffdhe2048" 00:11:52.311 } 00:11:52.311 } 00:11:52.311 ]' 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.311 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.569 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:52.569 10:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.504 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.505 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.505 10:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.072 00:11:54.072 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.072 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:54.072 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.330 { 00:11:54.330 "cntlid": 113, 00:11:54.330 "qid": 0, 00:11:54.330 "state": "enabled", 00:11:54.330 "thread": "nvmf_tgt_poll_group_000", 00:11:54.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:54.330 "listen_address": { 00:11:54.330 "trtype": "TCP", 00:11:54.330 "adrfam": "IPv4", 00:11:54.330 "traddr": "10.0.0.3", 00:11:54.330 "trsvcid": "4420" 00:11:54.330 }, 00:11:54.330 "peer_address": { 00:11:54.330 "trtype": "TCP", 00:11:54.330 "adrfam": "IPv4", 00:11:54.330 "traddr": "10.0.0.1", 00:11:54.330 "trsvcid": "32996" 00:11:54.330 }, 00:11:54.330 "auth": { 00:11:54.330 "state": "completed", 00:11:54.330 "digest": "sha512", 00:11:54.330 "dhgroup": "ffdhe3072" 00:11:54.330 } 00:11:54.330 } 00:11:54.330 ]' 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.330 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.897 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:54.897 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret 
DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:11:55.465 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.465 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:55.465 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.465 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.465 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.465 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.465 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:55.465 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.723 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.291 00:11:56.291 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.291 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.291 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.550 { 00:11:56.550 "cntlid": 115, 00:11:56.550 "qid": 0, 00:11:56.550 "state": "enabled", 00:11:56.550 "thread": "nvmf_tgt_poll_group_000", 00:11:56.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:56.550 "listen_address": { 00:11:56.550 "trtype": "TCP", 00:11:56.550 "adrfam": "IPv4", 00:11:56.550 "traddr": "10.0.0.3", 00:11:56.550 "trsvcid": "4420" 00:11:56.550 }, 00:11:56.550 "peer_address": { 00:11:56.550 "trtype": "TCP", 00:11:56.550 "adrfam": "IPv4", 00:11:56.550 "traddr": "10.0.0.1", 00:11:56.550 "trsvcid": "33036" 00:11:56.550 }, 00:11:56.550 "auth": { 00:11:56.550 "state": "completed", 00:11:56.550 "digest": "sha512", 00:11:56.550 "dhgroup": "ffdhe3072" 00:11:56.550 } 00:11:56.550 } 00:11:56.550 ]' 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.550 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:56.809 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.809 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.809 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.809 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.067 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:57.067 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 
89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:11:57.634 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.634 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:57.634 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.634 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.634 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.634 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.634 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:57.634 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.893 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.894 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.894 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.894 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.894 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.465 00:11:58.465 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.465 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.465 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.725 { 00:11:58.725 "cntlid": 117, 00:11:58.725 "qid": 0, 00:11:58.725 "state": "enabled", 00:11:58.725 "thread": "nvmf_tgt_poll_group_000", 00:11:58.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:11:58.725 "listen_address": { 00:11:58.725 "trtype": "TCP", 00:11:58.725 "adrfam": "IPv4", 00:11:58.725 "traddr": "10.0.0.3", 00:11:58.725 "trsvcid": "4420" 00:11:58.725 }, 00:11:58.725 "peer_address": { 00:11:58.725 "trtype": "TCP", 00:11:58.725 "adrfam": "IPv4", 00:11:58.725 "traddr": "10.0.0.1", 00:11:58.725 "trsvcid": "33064" 00:11:58.725 }, 00:11:58.725 "auth": { 00:11:58.725 "state": "completed", 00:11:58.725 "digest": "sha512", 00:11:58.725 "dhgroup": "ffdhe3072" 00:11:58.725 } 00:11:58.725 } 00:11:58.725 ]' 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.725 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.984 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:58.984 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:11:59.932 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.932 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:11:59.932 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.932 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.932 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.932 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.932 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:59.932 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.932 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:00.500 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.500 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.500 { 00:12:00.500 "cntlid": 119, 00:12:00.500 "qid": 0, 00:12:00.500 "state": "enabled", 00:12:00.500 "thread": "nvmf_tgt_poll_group_000", 00:12:00.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:00.500 "listen_address": { 00:12:00.500 "trtype": "TCP", 00:12:00.500 "adrfam": "IPv4", 00:12:00.500 "traddr": "10.0.0.3", 00:12:00.500 "trsvcid": "4420" 00:12:00.500 }, 00:12:00.500 "peer_address": { 00:12:00.500 "trtype": "TCP", 00:12:00.500 "adrfam": "IPv4", 00:12:00.500 "traddr": "10.0.0.1", 00:12:00.500 "trsvcid": "33094" 00:12:00.500 }, 00:12:00.500 "auth": { 00:12:00.500 "state": "completed", 00:12:00.500 "digest": "sha512", 00:12:00.500 "dhgroup": "ffdhe3072" 00:12:00.500 } 00:12:00.500 } 00:12:00.500 ]' 00:12:00.760 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.760 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.760 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.760 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.760 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.760 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.760 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.760 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.019 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:01.019 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:01.587 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.155 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:02.414 00:12:02.414 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.414 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.414 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.673 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.673 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.673 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.673 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.673 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.673 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.673 { 00:12:02.673 "cntlid": 121, 00:12:02.673 "qid": 0, 00:12:02.673 "state": "enabled", 00:12:02.673 "thread": "nvmf_tgt_poll_group_000", 00:12:02.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:02.673 "listen_address": { 00:12:02.673 "trtype": "TCP", 00:12:02.673 "adrfam": "IPv4", 00:12:02.673 "traddr": "10.0.0.3", 00:12:02.673 "trsvcid": "4420" 00:12:02.673 }, 00:12:02.673 "peer_address": { 00:12:02.673 "trtype": "TCP", 00:12:02.673 "adrfam": "IPv4", 00:12:02.673 "traddr": "10.0.0.1", 00:12:02.673 "trsvcid": "33118" 00:12:02.673 }, 00:12:02.673 "auth": { 00:12:02.673 "state": "completed", 00:12:02.673 "digest": "sha512", 00:12:02.673 "dhgroup": "ffdhe4096" 00:12:02.673 } 00:12:02.673 } 00:12:02.673 ]' 00:12:02.673 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.673 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.674 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.674 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:02.674 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.674 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.674 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.674 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.242 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret 
DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:12:03.242 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:12:03.809 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.809 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:03.809 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.809 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.809 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.809 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.809 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:03.809 10:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.080 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.352 00:12:04.611 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.611 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.611 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.870 { 00:12:04.870 "cntlid": 123, 00:12:04.870 "qid": 0, 00:12:04.870 "state": "enabled", 00:12:04.870 "thread": "nvmf_tgt_poll_group_000", 00:12:04.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:04.870 "listen_address": { 00:12:04.870 "trtype": "TCP", 00:12:04.870 "adrfam": "IPv4", 00:12:04.870 "traddr": "10.0.0.3", 00:12:04.870 "trsvcid": "4420" 00:12:04.870 }, 00:12:04.870 "peer_address": { 00:12:04.870 "trtype": "TCP", 00:12:04.870 "adrfam": "IPv4", 00:12:04.870 "traddr": "10.0.0.1", 00:12:04.870 "trsvcid": "41676" 00:12:04.870 }, 00:12:04.870 "auth": { 00:12:04.870 "state": "completed", 00:12:04.870 "digest": "sha512", 00:12:04.870 "dhgroup": "ffdhe4096" 00:12:04.870 } 00:12:04.870 } 00:12:04.870 ]' 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:04.870 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.870 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.870 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.870 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.438 10:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:12:05.438 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:12:06.006 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.006 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:06.006 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.006 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.006 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.006 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.006 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:06.006 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.265 10:01:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.265 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.523 00:12:06.523 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.523 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.523 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.090 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.090 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.090 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.090 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.090 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.090 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.090 { 00:12:07.090 "cntlid": 125, 00:12:07.090 "qid": 0, 00:12:07.090 "state": "enabled", 00:12:07.090 "thread": "nvmf_tgt_poll_group_000", 00:12:07.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:07.090 "listen_address": { 00:12:07.090 "trtype": "TCP", 00:12:07.090 "adrfam": "IPv4", 00:12:07.090 "traddr": "10.0.0.3", 00:12:07.090 "trsvcid": "4420" 00:12:07.090 }, 00:12:07.090 "peer_address": { 00:12:07.090 "trtype": "TCP", 00:12:07.090 "adrfam": "IPv4", 00:12:07.090 "traddr": "10.0.0.1", 00:12:07.090 "trsvcid": "41692" 00:12:07.090 }, 00:12:07.090 "auth": { 00:12:07.090 "state": "completed", 00:12:07.090 "digest": "sha512", 00:12:07.090 "dhgroup": "ffdhe4096" 00:12:07.090 } 00:12:07.090 } 00:12:07.090 ]' 00:12:07.090 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.090 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.090 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.090 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:07.090 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.090 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.090 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.090 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.348 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:12:07.349 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:08.283 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:08.855 00:12:08.855 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.855 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.855 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.123 { 00:12:09.123 "cntlid": 127, 00:12:09.123 "qid": 0, 00:12:09.123 "state": "enabled", 00:12:09.123 "thread": "nvmf_tgt_poll_group_000", 00:12:09.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:09.123 "listen_address": { 00:12:09.123 "trtype": "TCP", 00:12:09.123 "adrfam": "IPv4", 00:12:09.123 "traddr": "10.0.0.3", 00:12:09.123 "trsvcid": "4420" 00:12:09.123 }, 00:12:09.123 "peer_address": { 00:12:09.123 "trtype": "TCP", 00:12:09.123 "adrfam": "IPv4", 00:12:09.123 "traddr": "10.0.0.1", 00:12:09.123 "trsvcid": "41720" 00:12:09.123 }, 00:12:09.123 "auth": { 00:12:09.123 "state": "completed", 00:12:09.123 "digest": "sha512", 00:12:09.123 "dhgroup": "ffdhe4096" 00:12:09.123 } 00:12:09.123 } 00:12:09.123 ]' 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.123 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.383 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:09.383 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.318 10:01:42 
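The cycle repeated throughout this excerpt follows one fixed pattern per digest/dhgroup/key combination. A minimal sketch of a single iteration, using the host RPC socket, NQNs and key names visible in this run (it assumes the DH-HMAC-CHAP keys key0/ckey0 were registered earlier in the test, before this excerpt; rpc_cmd is the autotest wrapper for the target-side RPC socket):

# restrict the SPDK host to the digest/dhgroup pair under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# register the host on the target subsystem with its DH-HMAC-CHAP key pair
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# attach a controller from the SPDK host, authenticating with the same keys
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0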
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.318 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.884 00:12:10.884 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.884 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.884 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.142 { 00:12:11.142 "cntlid": 129, 00:12:11.142 "qid": 0, 00:12:11.142 "state": "enabled", 00:12:11.142 "thread": "nvmf_tgt_poll_group_000", 00:12:11.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:11.142 "listen_address": { 00:12:11.142 "trtype": "TCP", 00:12:11.142 "adrfam": "IPv4", 00:12:11.142 "traddr": "10.0.0.3", 00:12:11.142 "trsvcid": "4420" 00:12:11.142 }, 00:12:11.142 "peer_address": { 00:12:11.142 "trtype": "TCP", 00:12:11.142 "adrfam": "IPv4", 00:12:11.142 "traddr": "10.0.0.1", 00:12:11.142 "trsvcid": "41742" 00:12:11.142 }, 00:12:11.142 "auth": { 00:12:11.142 "state": "completed", 00:12:11.142 "digest": "sha512", 00:12:11.142 "dhgroup": "ffdhe6144" 00:12:11.142 } 00:12:11.142 } 00:12:11.142 ]' 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:11.142 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.400 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:11.400 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.400 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.400 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.400 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.658 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:12:11.658 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:12:12.225 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.225 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:12.225 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.225 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.225 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.225 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.225 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:12.225 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.792 10:01:44 
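The [[ ... ]] assertions that follow each attach reduce to querying the target for the active qpair and comparing the negotiated auth parameters. A sketch under the same assumptions as above:

# confirm the controller came up under the expected name on the host
name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# ask the target which digest/dhgroup the completed authentication used
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]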
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.792 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.359 00:12:13.359 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.359 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.359 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.617 { 00:12:13.617 "cntlid": 131, 00:12:13.617 "qid": 0, 00:12:13.617 "state": "enabled", 00:12:13.617 "thread": "nvmf_tgt_poll_group_000", 00:12:13.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:13.617 "listen_address": { 00:12:13.617 "trtype": "TCP", 00:12:13.617 "adrfam": "IPv4", 00:12:13.617 "traddr": "10.0.0.3", 00:12:13.617 "trsvcid": "4420" 00:12:13.617 }, 00:12:13.617 "peer_address": { 00:12:13.617 "trtype": "TCP", 00:12:13.617 "adrfam": "IPv4", 00:12:13.617 "traddr": "10.0.0.1", 00:12:13.617 "trsvcid": "54030" 00:12:13.617 }, 00:12:13.617 "auth": { 00:12:13.617 "state": "completed", 00:12:13.617 "digest": "sha512", 00:12:13.617 "dhgroup": "ffdhe6144" 00:12:13.617 } 00:12:13.617 } 00:12:13.617 ]' 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.617 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.875 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:12:13.875 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:12:14.823 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.823 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:14.823 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.823 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.823 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.823 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.823 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:14.823 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.082 10:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.082 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.649 00:12:15.649 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.649 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.649 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.908 { 00:12:15.908 "cntlid": 133, 00:12:15.908 "qid": 0, 00:12:15.908 "state": "enabled", 00:12:15.908 "thread": "nvmf_tgt_poll_group_000", 00:12:15.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:15.908 "listen_address": { 00:12:15.908 "trtype": "TCP", 00:12:15.908 "adrfam": "IPv4", 00:12:15.908 "traddr": "10.0.0.3", 00:12:15.908 "trsvcid": "4420" 00:12:15.908 }, 00:12:15.908 "peer_address": { 00:12:15.908 "trtype": "TCP", 00:12:15.908 "adrfam": "IPv4", 00:12:15.908 "traddr": "10.0.0.1", 00:12:15.908 "trsvcid": "54060" 00:12:15.908 }, 00:12:15.908 "auth": { 00:12:15.908 "state": "completed", 00:12:15.908 "digest": "sha512", 00:12:15.908 "dhgroup": "ffdhe6144" 00:12:15.908 } 00:12:15.908 } 00:12:15.908 ]' 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:15.908 10:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.908 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.908 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.908 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.166 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:12:16.166 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.101 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.360 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.360 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:17.360 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.360 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.630 00:12:17.630 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.630 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.630 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.891 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.891 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.891 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.891 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.891 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.891 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.891 { 00:12:17.891 "cntlid": 135, 00:12:17.891 "qid": 0, 00:12:17.891 "state": "enabled", 00:12:17.891 "thread": "nvmf_tgt_poll_group_000", 00:12:17.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:17.891 "listen_address": { 00:12:17.891 "trtype": "TCP", 00:12:17.891 "adrfam": "IPv4", 00:12:17.891 "traddr": "10.0.0.3", 00:12:17.891 "trsvcid": "4420" 00:12:17.891 }, 00:12:17.891 "peer_address": { 00:12:17.891 "trtype": "TCP", 00:12:17.891 "adrfam": "IPv4", 00:12:17.891 "traddr": "10.0.0.1", 00:12:17.891 "trsvcid": "54096" 00:12:17.891 }, 00:12:17.891 "auth": { 00:12:17.891 "state": "completed", 00:12:17.891 "digest": "sha512", 00:12:17.891 "dhgroup": "ffdhe6144" 00:12:17.891 } 00:12:17.891 } 00:12:17.891 ]' 00:12:17.891 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.150 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.150 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.150 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:18.150 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.150 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.150 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.150 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.408 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:18.408 10:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.343 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.279 00:12:20.279 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.279 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.279 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.537 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.537 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.537 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.537 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.537 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.537 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.537 { 00:12:20.537 "cntlid": 137, 00:12:20.537 "qid": 0, 00:12:20.537 "state": "enabled", 00:12:20.537 "thread": "nvmf_tgt_poll_group_000", 00:12:20.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:20.537 "listen_address": { 00:12:20.537 "trtype": "TCP", 00:12:20.537 "adrfam": "IPv4", 00:12:20.537 "traddr": "10.0.0.3", 00:12:20.537 "trsvcid": "4420" 00:12:20.537 }, 00:12:20.537 "peer_address": { 00:12:20.537 "trtype": "TCP", 00:12:20.537 "adrfam": "IPv4", 00:12:20.537 "traddr": "10.0.0.1", 00:12:20.537 "trsvcid": "54120" 00:12:20.537 }, 00:12:20.537 "auth": { 00:12:20.537 "state": "completed", 00:12:20.537 "digest": "sha512", 00:12:20.537 "dhgroup": "ffdhe8192" 00:12:20.537 } 00:12:20.537 } 00:12:20.537 ]' 00:12:20.537 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.538 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.538 10:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.538 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.538 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.538 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.538 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.538 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.106 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:12:21.106 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:12:21.673 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.673 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:21.673 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.673 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.673 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.673 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.673 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:21.673 10:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:21.931 10:01:54 
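The kernel-host leg of each cycle drives nvme-cli directly and passes the DH-HMAC-CHAP secrets on the command line instead of key names; a sketch with the secrets replaced by placeholders (the real DHHC-1 values appear in the log above):

# connect via the kernel NVMe/TCP host, authenticating with DH-HMAC-CHAP
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 \
    --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 \
    --dhchap-secret 'DHHC-1:00:<host secret>' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'

# tear the session down before the next digest/dhgroup/key combination
nvme disconnect -n nqn.2024-03.io.spdk:cnode0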
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.931 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.866 00:12:22.866 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.866 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.866 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.866 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.866 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.866 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.866 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.125 { 00:12:23.125 "cntlid": 139, 00:12:23.125 "qid": 0, 00:12:23.125 "state": "enabled", 00:12:23.125 "thread": "nvmf_tgt_poll_group_000", 00:12:23.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:23.125 "listen_address": { 00:12:23.125 "trtype": "TCP", 00:12:23.125 "adrfam": "IPv4", 00:12:23.125 "traddr": "10.0.0.3", 00:12:23.125 "trsvcid": "4420" 00:12:23.125 }, 00:12:23.125 "peer_address": { 00:12:23.125 "trtype": "TCP", 00:12:23.125 "adrfam": "IPv4", 00:12:23.125 "traddr": "10.0.0.1", 00:12:23.125 "trsvcid": "54140" 00:12:23.125 }, 00:12:23.125 "auth": { 00:12:23.125 "state": "completed", 00:12:23.125 "digest": "sha512", 00:12:23.125 "dhgroup": "ffdhe8192" 00:12:23.125 } 00:12:23.125 } 00:12:23.125 ]' 00:12:23.125 10:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.125 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.692 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:12:23.692 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: --dhchap-ctrl-secret DHHC-1:02:MzUzNGQ1MGVhNWFkYjI3NDI4OTc1N2U3NDE5Yjk1M2ZjMjAwM2E5MjhhMWRkYWJmBEoIqw==: 00:12:24.259 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.259 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:24.259 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.259 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.259 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.259 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.259 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:24.259 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.518 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.454 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.454 { 00:12:25.454 "cntlid": 141, 00:12:25.454 "qid": 0, 00:12:25.454 "state": "enabled", 00:12:25.454 "thread": "nvmf_tgt_poll_group_000", 00:12:25.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:25.454 "listen_address": { 00:12:25.454 "trtype": "TCP", 00:12:25.454 "adrfam": "IPv4", 00:12:25.454 "traddr": "10.0.0.3", 00:12:25.454 "trsvcid": "4420" 00:12:25.454 }, 00:12:25.454 "peer_address": { 00:12:25.454 "trtype": "TCP", 00:12:25.454 "adrfam": "IPv4", 00:12:25.454 "traddr": "10.0.0.1", 00:12:25.454 "trsvcid": "53066" 00:12:25.454 }, 00:12:25.454 "auth": { 00:12:25.454 "state": "completed", 00:12:25.454 "digest": 
"sha512", 00:12:25.454 "dhgroup": "ffdhe8192" 00:12:25.454 } 00:12:25.454 } 00:12:25.454 ]' 00:12:25.454 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.713 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.713 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.713 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.713 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.713 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.713 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.713 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.971 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:12:25.971 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:01:YzIzM2U2NjM1NmNhMzhlMWZmOGJkMDA4NTY1ZjUxYjctUMgR: 00:12:26.906 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.906 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:26.906 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.906 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.906 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.906 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.906 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:26.906 10:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.906 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.165 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.165 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:27.165 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.165 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.732 00:12:27.732 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.732 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.732 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.990 { 00:12:27.990 "cntlid": 143, 00:12:27.990 "qid": 0, 00:12:27.990 "state": "enabled", 00:12:27.990 "thread": "nvmf_tgt_poll_group_000", 00:12:27.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:27.990 "listen_address": { 00:12:27.990 "trtype": "TCP", 00:12:27.990 "adrfam": "IPv4", 00:12:27.990 "traddr": "10.0.0.3", 00:12:27.990 "trsvcid": "4420" 00:12:27.990 }, 00:12:27.990 "peer_address": { 00:12:27.990 "trtype": "TCP", 00:12:27.990 "adrfam": "IPv4", 00:12:27.990 "traddr": "10.0.0.1", 00:12:27.990 "trsvcid": "53088" 00:12:27.990 }, 00:12:27.990 "auth": { 00:12:27.990 "state": "completed", 00:12:27.990 
"digest": "sha512", 00:12:27.990 "dhgroup": "ffdhe8192" 00:12:27.990 } 00:12:27.990 } 00:12:27.990 ]' 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:27.990 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.249 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.249 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.249 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.507 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:28.508 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:29.074 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.332 10:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.269 00:12:30.269 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.269 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.269 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.528 { 00:12:30.528 "cntlid": 145, 00:12:30.528 "qid": 0, 00:12:30.528 "state": "enabled", 00:12:30.528 "thread": "nvmf_tgt_poll_group_000", 00:12:30.528 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:30.528 "listen_address": { 00:12:30.528 "trtype": "TCP", 00:12:30.528 "adrfam": "IPv4", 00:12:30.528 "traddr": "10.0.0.3", 00:12:30.528 "trsvcid": "4420" 00:12:30.528 }, 00:12:30.528 "peer_address": { 00:12:30.528 "trtype": "TCP", 00:12:30.528 "adrfam": "IPv4", 00:12:30.528 "traddr": "10.0.0.1", 00:12:30.528 "trsvcid": "53114" 00:12:30.528 }, 00:12:30.528 "auth": { 00:12:30.528 "state": "completed", 00:12:30.528 "digest": "sha512", 00:12:30.528 "dhgroup": "ffdhe8192" 00:12:30.528 } 00:12:30.528 } 00:12:30.528 ]' 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.528 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:12:30.786 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:00:YzBkZjJmYjg3MzEyMzU2MjRmZjI0MTNmOTQwODBlM2YwODJjZjQxYTRlY2Y0Zjg02KcCBw==: --dhchap-ctrl-secret DHHC-1:03:OGRiN2U2ODExNTQ2MTIzMGNmYWQ5YzMyOTgxOThkZWEyZmQyMGU5ZGEwYmI1ZTY0YjcxYThjZDNlOTZhYjlhZWbOVzI=: 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 00:12:31.721 10:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:31.721 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:32.288 request: 00:12:32.288 { 00:12:32.288 "name": "nvme0", 00:12:32.288 "trtype": "tcp", 00:12:32.288 "traddr": "10.0.0.3", 00:12:32.288 "adrfam": "ipv4", 00:12:32.288 "trsvcid": "4420", 00:12:32.288 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:32.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:32.288 "prchk_reftag": false, 00:12:32.288 "prchk_guard": false, 00:12:32.288 "hdgst": false, 00:12:32.288 "ddgst": false, 00:12:32.288 "dhchap_key": "key2", 00:12:32.288 "allow_unrecognized_csi": false, 00:12:32.288 "method": "bdev_nvme_attach_controller", 00:12:32.288 "req_id": 1 00:12:32.288 } 00:12:32.288 Got JSON-RPC error response 00:12:32.288 response: 00:12:32.288 { 00:12:32.288 "code": -5, 00:12:32.288 "message": "Input/output error" 00:12:32.288 } 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:32.288 
10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.288 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:32.855 request: 00:12:32.855 { 00:12:32.855 "name": "nvme0", 00:12:32.855 "trtype": "tcp", 00:12:32.855 "traddr": "10.0.0.3", 00:12:32.855 "adrfam": "ipv4", 00:12:32.855 "trsvcid": "4420", 00:12:32.855 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:32.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:32.855 "prchk_reftag": false, 00:12:32.855 "prchk_guard": false, 00:12:32.855 "hdgst": false, 00:12:32.855 "ddgst": false, 00:12:32.855 "dhchap_key": "key1", 00:12:32.855 "dhchap_ctrlr_key": "ckey2", 00:12:32.855 "allow_unrecognized_csi": false, 00:12:32.855 "method": "bdev_nvme_attach_controller", 00:12:32.855 "req_id": 1 00:12:32.855 } 00:12:32.855 Got JSON-RPC error response 00:12:32.855 response: 00:12:32.855 { 
00:12:32.855 "code": -5, 00:12:32.855 "message": "Input/output error" 00:12:32.855 } 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.855 10:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.421 
request: 00:12:33.421 { 00:12:33.421 "name": "nvme0", 00:12:33.421 "trtype": "tcp", 00:12:33.421 "traddr": "10.0.0.3", 00:12:33.421 "adrfam": "ipv4", 00:12:33.421 "trsvcid": "4420", 00:12:33.421 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:33.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:33.421 "prchk_reftag": false, 00:12:33.421 "prchk_guard": false, 00:12:33.421 "hdgst": false, 00:12:33.421 "ddgst": false, 00:12:33.421 "dhchap_key": "key1", 00:12:33.421 "dhchap_ctrlr_key": "ckey1", 00:12:33.421 "allow_unrecognized_csi": false, 00:12:33.421 "method": "bdev_nvme_attach_controller", 00:12:33.421 "req_id": 1 00:12:33.421 } 00:12:33.421 Got JSON-RPC error response 00:12:33.421 response: 00:12:33.421 { 00:12:33.421 "code": -5, 00:12:33.421 "message": "Input/output error" 00:12:33.421 } 00:12:33.421 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:33.421 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:33.421 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:33.421 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:33.421 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:33.421 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67203 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67203 ']' 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67203 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67203 00:12:33.422 killing process with pid 67203 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67203' 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67203 00:12:33.422 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67203 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:33.680 10:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70306 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70306 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70306 ']' 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:33.680 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.938 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:33.938 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:33.938 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.938 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.938 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.197 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.197 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:34.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.197 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70306 00:12:34.197 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70306 ']' 00:12:34.198 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.198 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:34.198 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
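At this point the run has killed the first target (pid 67203) and restarted nvmf_tgt with --wait-for-rpc -L nvmf_auth, then blocks on the RPC socket before any keys are loaded. A minimal sketch of that start-and-wait pattern, assuming the SPDK tree layout visible in the log (the polling loop below is illustrative; the test itself relies on its own nvmfappstart/waitforlisten helpers):

#!/usr/bin/env bash
# Sketch: start nvmf_tgt paused for configuration and wait for its RPC socket.
# SPDK_DIR and RPC_SOCK mirror the paths that appear in the log above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/nvmf_tgt" --wait-for-rpc -L nvmf_auth &
tgt_pid=$!

# rpc.py exits non-zero until the target is listening on the socket.
until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Finish framework init; keyring and subsystem setup can then proceed.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" framework_start_init
echo "nvmf_tgt ($tgt_pid) ready for auth configuration"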
00:12:34.198 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:34.198 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.457 null0 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jGA 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.PLq ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PLq 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PeW 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1Zh ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1Zh 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:34.457 10:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9cm 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.tMq ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tMq 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.709 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.457 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
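The entries above register each generated key file in the target keyring (keyring_file_add_key key0 through key3, plus the ckey counterparts) and then connect_authenticate re-runs the sha512/ffdhe8192 handshake with key3. Stripped of the xtrace plumbing, the target-side and host-side halves of that exchange reduce to roughly the following sketch; the key path is an illustrative placeholder, while the NQNs, addresses and RPC sockets are the ones used throughout this run:

# Target side: register the DHHC-1 key file and allow the host NQN with it.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735

"$RPC" keyring_file_add_key key3 /tmp/spdk.key-sha512.example
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# Host side: the initiator application (RPC socket /var/tmp/host.sock) needs
# the same key in its own keyring before it can reference it by name.
"$RPC" -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.example

# Attach with the matching key; the controller only comes up if the
# DH-HMAC-CHAP exchange completes. A wrong or missing key fails with the
# JSON-RPC code -5 "Input/output error" seen in the negative tests above.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3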
00:12:34.716 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.651 nvme0n1 00:12:35.651 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.651 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.651 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.908 { 00:12:35.908 "cntlid": 1, 00:12:35.908 "qid": 0, 00:12:35.908 "state": "enabled", 00:12:35.908 "thread": "nvmf_tgt_poll_group_000", 00:12:35.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:35.908 "listen_address": { 00:12:35.908 "trtype": "TCP", 00:12:35.908 "adrfam": "IPv4", 00:12:35.908 "traddr": "10.0.0.3", 00:12:35.908 "trsvcid": "4420" 00:12:35.908 }, 00:12:35.908 "peer_address": { 00:12:35.908 "trtype": "TCP", 00:12:35.908 "adrfam": "IPv4", 00:12:35.908 "traddr": "10.0.0.1", 00:12:35.908 "trsvcid": "42246" 00:12:35.908 }, 00:12:35.908 "auth": { 00:12:35.908 "state": "completed", 00:12:35.908 "digest": "sha512", 00:12:35.908 "dhgroup": "ffdhe8192" 00:12:35.908 } 00:12:35.908 } 00:12:35.908 ]' 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.908 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.908 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:35.908 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.167 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.167 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.167 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.426 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:36.426 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:36.993 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.993 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:36.993 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.993 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.993 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.993 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key3 00:12:36.993 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.994 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.994 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.994 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:36.994 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.561 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.561 request: 00:12:37.561 { 00:12:37.561 "name": "nvme0", 00:12:37.561 "trtype": "tcp", 00:12:37.561 "traddr": "10.0.0.3", 00:12:37.561 "adrfam": "ipv4", 00:12:37.561 "trsvcid": "4420", 00:12:37.561 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:37.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:37.561 "prchk_reftag": false, 00:12:37.561 "prchk_guard": false, 00:12:37.561 "hdgst": false, 00:12:37.561 "ddgst": false, 00:12:37.561 "dhchap_key": "key3", 00:12:37.562 "allow_unrecognized_csi": false, 00:12:37.562 "method": "bdev_nvme_attach_controller", 00:12:37.562 "req_id": 1 00:12:37.562 } 00:12:37.562 Got JSON-RPC error response 00:12:37.562 response: 00:12:37.562 { 00:12:37.562 "code": -5, 00:12:37.562 "message": "Input/output error" 00:12:37.562 } 00:12:37.562 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:37.562 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.562 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.562 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.562 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:37.562 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:37.562 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:37.562 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.150 request: 00:12:38.150 { 00:12:38.150 "name": "nvme0", 00:12:38.150 "trtype": "tcp", 00:12:38.150 "traddr": "10.0.0.3", 00:12:38.150 "adrfam": "ipv4", 00:12:38.150 "trsvcid": "4420", 00:12:38.150 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:38.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:38.150 "prchk_reftag": false, 00:12:38.150 "prchk_guard": false, 00:12:38.150 "hdgst": false, 00:12:38.150 "ddgst": false, 00:12:38.150 "dhchap_key": "key3", 00:12:38.150 "allow_unrecognized_csi": false, 00:12:38.150 "method": "bdev_nvme_attach_controller", 00:12:38.150 "req_id": 1 00:12:38.150 } 00:12:38.150 Got JSON-RPC error response 00:12:38.150 response: 00:12:38.150 { 00:12:38.150 "code": -5, 00:12:38.150 "message": "Input/output error" 00:12:38.150 } 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:38.150 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:38.717 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:38.976 request: 00:12:38.976 { 00:12:38.976 "name": "nvme0", 00:12:38.976 "trtype": "tcp", 00:12:38.976 "traddr": "10.0.0.3", 00:12:38.976 "adrfam": "ipv4", 00:12:38.976 "trsvcid": "4420", 00:12:38.976 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:38.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:38.976 "prchk_reftag": false, 00:12:38.976 "prchk_guard": false, 00:12:38.976 "hdgst": false, 00:12:38.976 "ddgst": false, 00:12:38.976 "dhchap_key": "key0", 00:12:38.976 "dhchap_ctrlr_key": "key1", 00:12:38.976 "allow_unrecognized_csi": false, 00:12:38.976 "method": "bdev_nvme_attach_controller", 00:12:38.976 "req_id": 1 00:12:38.976 } 00:12:38.976 Got JSON-RPC error response 00:12:38.976 response: 00:12:38.976 { 00:12:38.976 "code": -5, 00:12:38.976 "message": "Input/output error" 00:12:38.976 } 00:12:38.976 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:38.976 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.976 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.976 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:12:38.976 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:38.976 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:38.976 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:39.235 nvme0n1 00:12:39.493 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:39.493 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:39.493 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.751 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.751 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.751 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.010 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 00:12:40.010 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.010 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.010 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.010 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:40.010 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:40.010 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:40.945 nvme0n1 00:12:40.945 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:40.945 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:40.945 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.204 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.204 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:41.204 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.204 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.204 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.204 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:41.204 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:41.204 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.463 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.463 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:41.463 10:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid 89901d6b-8f02-4106-8c0e-f8e118ca6735 -l 0 --dhchap-secret DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: --dhchap-ctrl-secret DHHC-1:03:OTc3N2Q4MzZiYjcyMzgyMGM0N2MzZjNiMmI3ZmYxMGY1NTAzMGIzOWYxNTBiNzU1NjE0ZTE2MGI5MTE1ZTJkMHqUAdM=: 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.030 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.599 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:42.599 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:42.599 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:42.599 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:42.599 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.599 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:42.599 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:42.599 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:42.600 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:42.600 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:43.165 request: 00:12:43.165 { 00:12:43.165 "name": "nvme0", 00:12:43.165 "trtype": "tcp", 00:12:43.165 "traddr": "10.0.0.3", 00:12:43.165 "adrfam": "ipv4", 00:12:43.165 "trsvcid": "4420", 00:12:43.165 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:43.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735", 00:12:43.165 "prchk_reftag": false, 00:12:43.165 "prchk_guard": false, 00:12:43.165 "hdgst": false, 00:12:43.165 "ddgst": false, 00:12:43.165 "dhchap_key": "key1", 00:12:43.165 "allow_unrecognized_csi": false, 00:12:43.165 "method": "bdev_nvme_attach_controller", 00:12:43.165 "req_id": 1 00:12:43.165 } 00:12:43.165 Got JSON-RPC error response 00:12:43.165 response: 00:12:43.165 { 00:12:43.165 "code": -5, 00:12:43.165 "message": "Input/output error" 00:12:43.165 } 00:12:43.165 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:43.165 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:43.165 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:43.165 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:43.165 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:43.165 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:43.165 10:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:44.102 nvme0n1 00:12:44.102 
10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:44.102 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.102 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:44.361 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.361 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.361 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.621 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:44.621 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.621 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.621 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.621 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:44.621 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:44.621 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:44.879 nvme0n1 00:12:44.879 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:44.879 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:44.879 10:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.138 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.138 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.138 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.398 10:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: '' 2s 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: ]] 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YWJlN2FlOTY4YjA3MzA2YThhM2UxMjE3ZjEyMGI5ZDhkCCVT: 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:45.398 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: 2s 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:47.938 10:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: ]] 00:12:47.938 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2ZlYzIwZTdlNDY2Mzg4YTYwYzlhOGFhOGVkZDdiOWU4YjUxY2UyNjU0M2JjMjU56rsWpA==: 00:12:47.939 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:47.939 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:49.844 10:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:50.780 nvme0n1 00:12:50.780 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:50.780 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.780 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.780 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.780 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:50.780 10:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:51.348 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:51.348 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:51.348 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.606 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.606 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:51.607 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.607 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.607 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.607 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:51.607 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:51.865 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:51.865 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.865 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:52.123 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.123 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:52.123 10:02:24 
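The auth.sh@252-@257 sequence above rotates DHCHAP keys on a live connection: the allowed keys for the subsystem/host pair are changed on the target with nvmf_subsystem_set_keys, and the already-attached host controller is re-authenticated in place with bdev_nvme_set_keys. Roughly, reusing this run's sockets and NQNs (a condensed sketch, not the full helpers from target/auth.sh; the target app answers on its default RPC socket in this setup):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735
    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # Target side: restrict the subsystem/host pair to key2/key3.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_set_keys \
        nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side: re-run DH-HMAC-CHAP on the existing controller with the new keys.
    $HOSTRPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

    # A mismatched rekey (e.g. key1/key3, tried further down) is expected to fail with
    # JSON-RPC -13 "Permission denied"; the test then waits for the controller to drop
    # by polling until bdev_nvme_get_controllers returns an empty list.
    while (( $($HOSTRPC bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1
    done
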
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.123 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.123 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:52.124 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:52.691 request: 00:12:52.691 { 00:12:52.691 "name": "nvme0", 00:12:52.691 "dhchap_key": "key1", 00:12:52.691 "dhchap_ctrlr_key": "key3", 00:12:52.691 "method": "bdev_nvme_set_keys", 00:12:52.691 "req_id": 1 00:12:52.691 } 00:12:52.691 Got JSON-RPC error response 00:12:52.691 response: 00:12:52.691 { 00:12:52.691 "code": -13, 00:12:52.691 "message": "Permission denied" 00:12:52.691 } 00:12:52.691 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:52.691 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.691 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.691 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.691 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:52.691 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:52.691 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.950 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:52.950 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:54.327 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:55.263 nvme0n1 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:55.263 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:56.212 request: 00:12:56.212 { 00:12:56.212 "name": "nvme0", 00:12:56.213 "dhchap_key": "key2", 00:12:56.213 "dhchap_ctrlr_key": "key0", 00:12:56.213 "method": "bdev_nvme_set_keys", 00:12:56.213 "req_id": 1 00:12:56.213 } 00:12:56.213 Got JSON-RPC error response 00:12:56.213 response: 00:12:56.213 { 00:12:56.213 "code": -13, 00:12:56.213 "message": "Permission denied" 00:12:56.213 } 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:56.213 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67233 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 67233 ']' 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 67233 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67233 00:12:57.595 killing process with pid 67233 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:57.595 10:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67233' 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 67233 00:12:57.595 10:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 67233 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:58.161 rmmod nvme_tcp 00:12:58.161 rmmod nvme_fabrics 00:12:58.161 rmmod nvme_keyring 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70306 ']' 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70306 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70306 ']' 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70306 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70306 00:12:58.161 killing process with pid 70306 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70306' 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70306 00:12:58.161 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70306 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.419 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:58.420 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:58.420 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:58.420 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:58.420 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:58.420 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:58.420 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:58.420 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jGA /tmp/spdk.key-sha256.PeW /tmp/spdk.key-sha384.9cm /tmp/spdk.key-sha512.709 /tmp/spdk.key-sha512.PLq /tmp/spdk.key-sha384.1Zh /tmp/spdk.key-sha256.tMq '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:58.678 00:12:58.678 real 3m13.646s 00:12:58.678 user 7m42.545s 00:12:58.678 sys 0m30.231s 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:58.678 ************************************ 00:12:58.678 END TEST nvmf_auth_target 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:58.678 ************************************ 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.678 ************************************ 00:12:58.678 START TEST nvmf_bdevio_no_huge 00:12:58.678 ************************************ 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:58.678 * Looking for test storage... 00:12:58.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:12:58.678 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:58.937 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:58.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.938 --rc genhtml_branch_coverage=1 00:12:58.938 --rc genhtml_function_coverage=1 00:12:58.938 --rc genhtml_legend=1 00:12:58.938 --rc geninfo_all_blocks=1 00:12:58.938 --rc geninfo_unexecuted_blocks=1 00:12:58.938 00:12:58.938 ' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:58.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.938 --rc genhtml_branch_coverage=1 00:12:58.938 --rc genhtml_function_coverage=1 00:12:58.938 --rc genhtml_legend=1 00:12:58.938 --rc geninfo_all_blocks=1 00:12:58.938 --rc geninfo_unexecuted_blocks=1 00:12:58.938 00:12:58.938 ' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:58.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.938 --rc genhtml_branch_coverage=1 00:12:58.938 --rc genhtml_function_coverage=1 00:12:58.938 --rc genhtml_legend=1 00:12:58.938 --rc geninfo_all_blocks=1 00:12:58.938 --rc geninfo_unexecuted_blocks=1 00:12:58.938 00:12:58.938 ' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:58.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.938 --rc genhtml_branch_coverage=1 00:12:58.938 --rc genhtml_function_coverage=1 00:12:58.938 --rc genhtml_legend=1 00:12:58.938 --rc geninfo_all_blocks=1 00:12:58.938 --rc geninfo_unexecuted_blocks=1 00:12:58.938 00:12:58.938 ' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.938 
10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.938 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.938 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.939 
10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:58.939 Cannot find device "nvmf_init_br" 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:58.939 Cannot find device "nvmf_init_br2" 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:58.939 Cannot find device "nvmf_tgt_br" 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:58.939 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.939 Cannot find device "nvmf_tgt_br2" 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.939 Cannot find device "nvmf_init_br" 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.939 Cannot find device "nvmf_init_br2" 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.939 Cannot find device "nvmf_tgt_br" 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.939 Cannot find device "nvmf_tgt_br2" 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.939 Cannot find device "nvmf_br" 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.939 Cannot find device "nvmf_init_if" 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.939 Cannot find device "nvmf_init_if2" 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:58.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.939 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.939 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:59.198 10:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:59.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:12:59.198 00:12:59.198 --- 10.0.0.3 ping statistics --- 00:12:59.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.198 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:59.198 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:59.198 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:12:59.198 00:12:59.198 --- 10.0.0.4 ping statistics --- 00:12:59.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.198 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:59.198 00:12:59.198 --- 10.0.0.1 ping statistics --- 00:12:59.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.198 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:59.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:59.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:12:59.198 00:12:59.198 --- 10.0.0.2 ping statistics --- 00:12:59.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.198 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70953 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70953 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 70953 ']' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:59.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:59.198 10:02:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:59.457 [2024-11-04 10:02:31.385633] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:12:59.457 [2024-11-04 10:02:31.386362] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:59.457 [2024-11-04 10:02:31.555238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.716 [2024-11-04 10:02:31.642246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.716 [2024-11-04 10:02:31.642319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.716 [2024-11-04 10:02:31.642334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.716 [2024-11-04 10:02:31.642345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.716 [2024-11-04 10:02:31.642354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.716 [2024-11-04 10:02:31.643038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:59.716 [2024-11-04 10:02:31.643202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:59.716 [2024-11-04 10:02:31.643267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:59.716 [2024-11-04 10:02:31.643270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.716 [2024-11-04 10:02:31.649472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.283 [2024-11-04 10:02:32.429258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.283 Malloc0 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.283 10:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.283 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:00.542 [2024-11-04 10:02:32.469537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:00.542 { 00:13:00.542 "params": { 00:13:00.542 "name": "Nvme$subsystem", 00:13:00.542 "trtype": "$TEST_TRANSPORT", 00:13:00.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.542 "adrfam": "ipv4", 00:13:00.542 "trsvcid": "$NVMF_PORT", 00:13:00.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.542 "hdgst": ${hdgst:-false}, 00:13:00.542 "ddgst": ${ddgst:-false} 00:13:00.542 }, 00:13:00.542 "method": "bdev_nvme_attach_controller" 00:13:00.542 } 00:13:00.542 EOF 00:13:00.542 )") 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
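Note: the records around here show how the bdevio initiator config is assembled. gen_nvmf_target_json fills the heredoc template once for the single controller under test (Nvme1 at 10.0.0.3:4420, subsystem nqn.2016-06.io.spdk:cnode1), joins the fragments with IFS=',', pretty-prints them with jq, and hands the result to bdevio on /dev/fd/62. A minimal standalone sketch of the same run is below; the "subsystems"/"bdev"/"config" wrapper is the usual SPDK JSON-config shape and is assumed here rather than copied from this trace, and /tmp/bdevio_nvme.json is a hypothetical scratch file.

# Minimal standalone sketch of the bdevio run traced above (not the literal
# gen_nvmf_target_json output); addresses and NQNs are the ones used in this run.
cat > /tmp/bdevio_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json /tmp/bdevio_nvme.json --no-huge -s 1024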
00:13:00.542 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:00.543 10:02:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:00.543 "params": { 00:13:00.543 "name": "Nvme1", 00:13:00.543 "trtype": "tcp", 00:13:00.543 "traddr": "10.0.0.3", 00:13:00.543 "adrfam": "ipv4", 00:13:00.543 "trsvcid": "4420", 00:13:00.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.543 "hdgst": false, 00:13:00.543 "ddgst": false 00:13:00.543 }, 00:13:00.543 "method": "bdev_nvme_attach_controller" 00:13:00.543 }' 00:13:00.543 [2024-11-04 10:02:32.531784] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:13:00.543 [2024-11-04 10:02:32.531877] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70989 ] 00:13:00.801 [2024-11-04 10:02:32.719969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.801 [2024-11-04 10:02:32.818895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.801 [2024-11-04 10:02:32.819014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.801 [2024-11-04 10:02:32.819022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.801 [2024-11-04 10:02:32.833014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:01.060 I/O targets: 00:13:01.060 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:01.060 00:13:01.060 00:13:01.060 CUnit - A unit testing framework for C - Version 2.1-3 00:13:01.060 http://cunit.sourceforge.net/ 00:13:01.060 00:13:01.060 00:13:01.060 Suite: bdevio tests on: Nvme1n1 00:13:01.060 Test: blockdev write read block ...passed 00:13:01.060 Test: blockdev write zeroes read block ...passed 00:13:01.060 Test: blockdev write zeroes read no split ...passed 00:13:01.060 Test: blockdev write zeroes read split ...passed 00:13:01.060 Test: blockdev write zeroes read split partial ...passed 00:13:01.060 Test: blockdev reset ...[2024-11-04 10:02:33.083573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:01.060 [2024-11-04 10:02:33.083939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230d310 (9): Bad file descriptor 00:13:01.060 [2024-11-04 10:02:33.101955] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:01.060 passed 00:13:01.060 Test: blockdev write read 8 blocks ...passed 00:13:01.060 Test: blockdev write read size > 128k ...passed 00:13:01.060 Test: blockdev write read invalid size ...passed 00:13:01.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:01.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:01.060 Test: blockdev write read max offset ...passed 00:13:01.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:01.060 Test: blockdev writev readv 8 blocks ...passed 00:13:01.060 Test: blockdev writev readv 30 x 1block ...passed 00:13:01.060 Test: blockdev writev readv block ...passed 00:13:01.060 Test: blockdev writev readv size > 128k ...passed 00:13:01.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:01.060 Test: blockdev comparev and writev ...[2024-11-04 10:02:33.112343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.060 [2024-11-04 10:02:33.112404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.112443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.060 [2024-11-04 10:02:33.112455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.112755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.060 [2024-11-04 10:02:33.112775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.112792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.060 [2024-11-04 10:02:33.112803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.113239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.060 [2024-11-04 10:02:33.113275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.113294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.060 [2024-11-04 10:02:33.113305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.113584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.060 [2024-11-04 10:02:33.113621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.113639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:01.060 [2024-11-04 10:02:33.113649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:01.060 passed 00:13:01.060 Test: blockdev nvme passthru rw ...passed 00:13:01.060 Test: blockdev nvme passthru vendor specific ...passed 00:13:01.060 Test: blockdev nvme admin passthru ...[2024-11-04 10:02:33.114795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:01.060 [2024-11-04 10:02:33.114912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.115119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:01.060 [2024-11-04 10:02:33.115139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.115247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:01.060 [2024-11-04 10:02:33.115263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:01.060 [2024-11-04 10:02:33.115374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:01.060 [2024-11-04 10:02:33.115390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:01.060 passed 00:13:01.060 Test: blockdev copy ...passed 00:13:01.060 00:13:01.060 Run Summary: Type Total Ran Passed Failed Inactive 00:13:01.060 suites 1 1 n/a 0 0 00:13:01.060 tests 23 23 23 0 0 00:13:01.060 asserts 152 152 152 0 n/a 00:13:01.060 00:13:01.060 Elapsed time = 0.175 seconds 00:13:01.319 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.319 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.319 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:01.319 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.319 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:01.319 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:01.319 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.319 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.578 rmmod nvme_tcp 00:13:01.578 rmmod nvme_fabrics 00:13:01.578 rmmod nvme_keyring 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70953 ']' 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70953 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 70953 ']' 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 70953 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70953 00:13:01.578 killing process with pid 70953 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70953' 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 70953 00:13:01.578 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 70953 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:02.146 10:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:02.146 00:13:02.146 real 0m3.546s 00:13:02.146 user 0m10.890s 00:13:02.146 sys 0m1.471s 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:02.146 ************************************ 00:13:02.146 END TEST nvmf_bdevio_no_huge 00:13:02.146 ************************************ 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:02.146 10:02:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.406 ************************************ 00:13:02.406 START TEST nvmf_tls 00:13:02.406 ************************************ 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:02.406 * Looking for test storage... 
00:13:02.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.406 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:02.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.407 --rc genhtml_branch_coverage=1 00:13:02.407 --rc genhtml_function_coverage=1 00:13:02.407 --rc genhtml_legend=1 00:13:02.407 --rc geninfo_all_blocks=1 00:13:02.407 --rc geninfo_unexecuted_blocks=1 00:13:02.407 00:13:02.407 ' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:02.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.407 --rc genhtml_branch_coverage=1 00:13:02.407 --rc genhtml_function_coverage=1 00:13:02.407 --rc genhtml_legend=1 00:13:02.407 --rc geninfo_all_blocks=1 00:13:02.407 --rc geninfo_unexecuted_blocks=1 00:13:02.407 00:13:02.407 ' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:02.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.407 --rc genhtml_branch_coverage=1 00:13:02.407 --rc genhtml_function_coverage=1 00:13:02.407 --rc genhtml_legend=1 00:13:02.407 --rc geninfo_all_blocks=1 00:13:02.407 --rc geninfo_unexecuted_blocks=1 00:13:02.407 00:13:02.407 ' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:02.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.407 --rc genhtml_branch_coverage=1 00:13:02.407 --rc genhtml_function_coverage=1 00:13:02.407 --rc genhtml_legend=1 00:13:02.407 --rc geninfo_all_blocks=1 00:13:02.407 --rc geninfo_unexecuted_blocks=1 00:13:02.407 00:13:02.407 ' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.407 10:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:02.407 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:02.407 
10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:02.407 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:02.408 Cannot find device "nvmf_init_br" 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:02.408 Cannot find device "nvmf_init_br2" 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:02.408 Cannot find device "nvmf_tgt_br" 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:02.408 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.666 Cannot find device "nvmf_tgt_br2" 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:02.666 Cannot find device "nvmf_init_br" 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:02.666 Cannot find device "nvmf_init_br2" 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:02.666 Cannot find device "nvmf_tgt_br" 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:02.666 Cannot find device "nvmf_tgt_br2" 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:02.666 Cannot find device "nvmf_br" 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:02.666 Cannot find device "nvmf_init_if" 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:02.666 Cannot find device "nvmf_init_if2" 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:02.666 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:02.925 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:02.925 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:02.925 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:02.925 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:02.925 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:02.925 10:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:02.925 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:02.925 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:02.926 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:02.926 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:13:02.926 00:13:02.926 --- 10.0.0.3 ping statistics --- 00:13:02.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.926 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:02.926 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:02.926 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:13:02.926 00:13:02.926 --- 10.0.0.4 ping statistics --- 00:13:02.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.926 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:02.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:02.926 00:13:02.926 --- 10.0.0.1 ping statistics --- 00:13:02.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.926 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:02.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:13:02.926 00:13:02.926 --- 10.0.0.2 ping statistics --- 00:13:02.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.926 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71225 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71225 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71225 ']' 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:02.926 10:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.926 [2024-11-04 10:02:34.975201] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:13:02.926 [2024-11-04 10:02:34.975291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.184 [2024-11-04 10:02:35.127908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.184 [2024-11-04 10:02:35.188897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.184 [2024-11-04 10:02:35.188959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.185 [2024-11-04 10:02:35.188974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.185 [2024-11-04 10:02:35.188985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.185 [2024-11-04 10:02:35.189001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.185 [2024-11-04 10:02:35.189443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.752 10:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:03.752 10:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:03.752 10:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.752 10:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:03.752 10:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.011 10:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.011 10:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:04.011 10:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:04.269 true 00:13:04.269 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:04.269 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:04.528 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:04.528 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:04.528 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:04.787 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:04.787 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:05.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:05.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:05.046 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:05.305 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:05.305 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:05.564 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:05.564 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:05.564 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:05.564 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:05.822 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:05.822 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:05.823 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:06.110 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:06.110 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:06.368 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:06.368 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:06.368 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:06.626 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:06.626 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:06.885 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.UoQnZCm3Md 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.d6edxxLE26 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.UoQnZCm3Md 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.d6edxxLE26 00:13:07.144 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:07.404 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:07.663 [2024-11-04 10:02:39.752383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:07.663 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.UoQnZCm3Md 00:13:07.663 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UoQnZCm3Md 00:13:07.663 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:07.921 [2024-11-04 10:02:40.062982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.921 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:08.489 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:08.489 [2024-11-04 10:02:40.591037] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:08.489 [2024-11-04 10:02:40.591248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:08.489 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:08.747 malloc0 00:13:08.747 10:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:09.006 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UoQnZCm3Md 00:13:09.264 10:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:09.523 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.UoQnZCm3Md 00:13:21.727 Initializing NVMe Controllers 00:13:21.727 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:21.727 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:21.727 Initialization complete. Launching workers. 00:13:21.727 ======================================================== 00:13:21.727 Latency(us) 00:13:21.727 Device Information : IOPS MiB/s Average min max 00:13:21.727 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10125.95 39.55 6321.74 1539.63 8737.47 00:13:21.727 ======================================================== 00:13:21.727 Total : 10125.95 39.55 6321.74 1539.63 8737.47 00:13:21.727 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UoQnZCm3Md 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UoQnZCm3Md 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71464 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71464 /var/tmp/bdevperf.sock 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71464 ']' 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:21.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
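For reference, the target-side TLS setup that the trace above just walked through (setup_nvmf_tgt in target/tls.sh) condenses to the RPC sequence below. This is only a restatement of commands already shown, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the key path being the mktemp output from earlier in the trace:

    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k    # -k: TLS-enabled listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.UoQnZCm3Md
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The keyring entry name (key0) is what the later --psk arguments refer to, and the -k flag on the listener is what produces the "TLS support is considered experimental" notices seen throughout this section.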
00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:21.727 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.727 [2024-11-04 10:02:51.948019] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:13:21.727 [2024-11-04 10:02:51.948315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71464 ] 00:13:21.727 [2024-11-04 10:02:52.092081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.727 [2024-11-04 10:02:52.146127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.727 [2024-11-04 10:02:52.199092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:21.727 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:21.727 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:21.727 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UoQnZCm3Md 00:13:21.727 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:21.727 [2024-11-04 10:02:52.770624] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:21.727 TLSTESTn1 00:13:21.727 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:21.727 Running I/O for 10 seconds... 
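Two initiator-side paths are exercised back to back here. The first is the spdk_nvme_perf run whose summary table appears above; the TLS-relevant parts of its invocation, copied from the trace, are the ssl socket implementation and the PSK file shared with the target:

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path /tmp/tmp.UoQnZCm3Md

The second is the bdevperf run that begins at "Running I/O for 10 seconds...": bdevperf is launched idle, handed the same key and a TLS-enabled controller over its own RPC socket, and then driven by bdevperf.py perform_tests. Every later case in this section reuses that pattern, so it is sketched once more near the end of the section.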
00:13:22.979 4384.00 IOPS, 17.12 MiB/s [2024-11-04T10:02:56.096Z] 4385.00 IOPS, 17.13 MiB/s [2024-11-04T10:02:57.030Z] 4362.33 IOPS, 17.04 MiB/s [2024-11-04T10:02:58.406Z] 4346.50 IOPS, 16.98 MiB/s [2024-11-04T10:02:58.973Z] 4362.80 IOPS, 17.04 MiB/s [2024-11-04T10:03:00.351Z] 4350.17 IOPS, 16.99 MiB/s [2024-11-04T10:03:01.286Z] 4365.00 IOPS, 17.05 MiB/s [2024-11-04T10:03:02.227Z] 4374.38 IOPS, 17.09 MiB/s [2024-11-04T10:03:03.185Z] 4375.67 IOPS, 17.09 MiB/s [2024-11-04T10:03:03.185Z] 4374.60 IOPS, 17.09 MiB/s 00:13:31.015 Latency(us) 00:13:31.015 [2024-11-04T10:03:03.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.015 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:31.015 Verification LBA range: start 0x0 length 0x2000 00:13:31.015 TLSTESTn1 : 10.02 4380.76 17.11 0.00 0.00 29166.43 5213.09 25499.46 00:13:31.015 [2024-11-04T10:03:03.185Z] =================================================================================================================== 00:13:31.015 [2024-11-04T10:03:03.185Z] Total : 4380.76 17.11 0.00 0.00 29166.43 5213.09 25499.46 00:13:31.015 { 00:13:31.015 "results": [ 00:13:31.015 { 00:13:31.015 "job": "TLSTESTn1", 00:13:31.015 "core_mask": "0x4", 00:13:31.015 "workload": "verify", 00:13:31.015 "status": "finished", 00:13:31.015 "verify_range": { 00:13:31.015 "start": 0, 00:13:31.015 "length": 8192 00:13:31.015 }, 00:13:31.015 "queue_depth": 128, 00:13:31.015 "io_size": 4096, 00:13:31.015 "runtime": 10.015165, 00:13:31.015 "iops": 4380.7565826424225, 00:13:31.015 "mibps": 17.112330400946963, 00:13:31.015 "io_failed": 0, 00:13:31.015 "io_timeout": 0, 00:13:31.015 "avg_latency_us": 29166.42530883895, 00:13:31.015 "min_latency_us": 5213.090909090909, 00:13:31.015 "max_latency_us": 25499.46181818182 00:13:31.015 } 00:13:31.015 ], 00:13:31.015 "core_count": 1 00:13:31.015 } 00:13:31.015 10:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71464 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71464 ']' 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71464 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71464 00:13:31.015 killing process with pid 71464 00:13:31.015 Received shutdown signal, test time was about 10.000000 seconds 00:13:31.015 00:13:31.015 Latency(us) 00:13:31.015 [2024-11-04T10:03:03.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.015 [2024-11-04T10:03:03.185Z] =================================================================================================================== 00:13:31.015 [2024-11-04T10:03:03.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 71464' 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71464 00:13:31.015 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71464 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d6edxxLE26 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d6edxxLE26 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d6edxxLE26 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.d6edxxLE26 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71597 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71597 /var/tmp/bdevperf.sock 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71597 ']' 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:31.274 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.274 [2024-11-04 10:03:03.291935] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:13:31.274 [2024-11-04 10:03:03.292333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71597 ] 00:13:31.274 [2024-11-04 10:03:03.443805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.533 [2024-11-04 10:03:03.503883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.533 [2024-11-04 10:03:03.556781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:32.098 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:32.098 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:32.098 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.d6edxxLE26 00:13:32.665 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:32.665 [2024-11-04 10:03:04.819972] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:32.665 [2024-11-04 10:03:04.827764] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spd[2024-11-04 10:03:04.828416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f74fb0 (107): Transport endpoint is not connected 00:13:32.665 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:32.665 [2024-11-04 10:03:04.829400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f74fb0 (9): Bad file descriptor 00:13:32.665 [2024-11-04 10:03:04.830396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:32.665 [2024-11-04 10:03:04.830424] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:32.665 [2024-11-04 10:03:04.830451] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:32.665 [2024-11-04 10:03:04.830463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:32.665 request: 00:13:32.665 { 00:13:32.665 "name": "TLSTEST", 00:13:32.665 "trtype": "tcp", 00:13:32.665 "traddr": "10.0.0.3", 00:13:32.665 "adrfam": "ipv4", 00:13:32.665 "trsvcid": "4420", 00:13:32.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:32.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:32.665 "prchk_reftag": false, 00:13:32.665 "prchk_guard": false, 00:13:32.665 "hdgst": false, 00:13:32.665 "ddgst": false, 00:13:32.665 "psk": "key0", 00:13:32.665 "allow_unrecognized_csi": false, 00:13:32.665 "method": "bdev_nvme_attach_controller", 00:13:32.665 "req_id": 1 00:13:32.665 } 00:13:32.665 Got JSON-RPC error response 00:13:32.665 response: 00:13:32.665 { 00:13:32.665 "code": -5, 00:13:32.665 "message": "Input/output error" 00:13:32.665 } 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71597 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71597 ']' 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71597 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71597 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:32.924 killing process with pid 71597 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71597' 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71597 00:13:32.924 Received shutdown signal, test time was about 10.000000 seconds 00:13:32.924 00:13:32.924 Latency(us) 00:13:32.924 [2024-11-04T10:03:05.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.924 [2024-11-04T10:03:05.094Z] =================================================================================================================== 00:13:32.924 [2024-11-04T10:03:05.094Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:32.924 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71597 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UoQnZCm3Md 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UoQnZCm3Md 
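The JSON-RPC failure above (code -5, Input/output error) is the expected outcome of the first negative case: the target holds only the PSK from /tmp/tmp.UoQnZCm3Md, the initiator presented /tmp/tmp.d6edxxLE26, so the TLS handshake never completes and bdev_nvme_attach_controller fails (the spdk_sock_recv() message in that run is split by an interleaved nvme_tcp.c line, but both report errno 107). run_bdevperf consequently returns 1, which the NOT wrapper treats as a pass. A rough sketch of what the assertion amounts to (this is not SPDK's actual NOT helper from autotest_common.sh, just the shape of the check):

    # Hypothetical restatement of the negative case: attaching with a mismatched PSK must fail.
    if run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.d6edxxLE26; then
        echo "attach with a mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi

The case starting just above keeps the correct key file but presents it as nqn.2016-06.io.spdk:host2 instead of host1.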
00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:32.924 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.UoQnZCm3Md 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UoQnZCm3Md 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71625 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71625 /var/tmp/bdevperf.sock 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71625 ']' 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:32.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:32.925 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.184 [2024-11-04 10:03:05.141674] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:13:33.184 [2024-11-04 10:03:05.141800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71625 ] 00:13:33.184 [2024-11-04 10:03:05.285303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.184 [2024-11-04 10:03:05.338682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.443 [2024-11-04 10:03:05.392805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.443 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:33.443 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:33.443 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UoQnZCm3Md 00:13:33.702 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:33.961 [2024-11-04 10:03:05.971846] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:33.961 [2024-11-04 10:03:05.977681] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:33.961 [2024-11-04 10:03:05.977720] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:33.961 [2024-11-04 10:03:05.977784] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:33.961 [2024-11-04 10:03:05.978689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c0fb0 (107): Transport endpoint is not connected 00:13:33.961 [2024-11-04 10:03:05.979675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c0fb0 (9): Bad file descriptor 00:13:33.961 [2024-11-04 10:03:05.980672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:33.961 [2024-11-04 10:03:05.980716] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:33.961 [2024-11-04 10:03:05.980728] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:33.961 [2024-11-04 10:03:05.980739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:33.961 request: 00:13:33.961 { 00:13:33.961 "name": "TLSTEST", 00:13:33.961 "trtype": "tcp", 00:13:33.961 "traddr": "10.0.0.3", 00:13:33.961 "adrfam": "ipv4", 00:13:33.961 "trsvcid": "4420", 00:13:33.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:33.961 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:33.961 "prchk_reftag": false, 00:13:33.961 "prchk_guard": false, 00:13:33.961 "hdgst": false, 00:13:33.961 "ddgst": false, 00:13:33.961 "psk": "key0", 00:13:33.961 "allow_unrecognized_csi": false, 00:13:33.961 "method": "bdev_nvme_attach_controller", 00:13:33.961 "req_id": 1 00:13:33.961 } 00:13:33.961 Got JSON-RPC error response 00:13:33.961 response: 00:13:33.961 { 00:13:33.961 "code": -5, 00:13:33.961 "message": "Input/output error" 00:13:33.961 } 00:13:33.961 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71625 00:13:33.961 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71625 ']' 00:13:33.961 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71625 00:13:33.961 10:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:33.961 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:33.961 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71625 00:13:33.961 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:33.961 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:33.961 killing process with pid 71625 00:13:33.961 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71625' 00:13:33.961 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71625 00:13:33.961 Received shutdown signal, test time was about 10.000000 seconds 00:13:33.961 00:13:33.961 Latency(us) 00:13:33.961 [2024-11-04T10:03:06.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.961 [2024-11-04T10:03:06.131Z] =================================================================================================================== 00:13:33.961 [2024-11-04T10:03:06.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:33.961 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71625 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UoQnZCm3Md 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UoQnZCm3Md 
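Both identity-mismatch cases fail in the same way, and the target-side tcp.c and posix.c errors in the trace show why: the server looks up the PSK by the TLS identity string "NVMe0R01 <hostnqn> <subnqn>", and the only binding created during setup was host1 against cnode1. Presenting host2 (the case that just failed) or targeting cnode2 (the case starting here) finds no PSK, the handshake is rejected, and the initiator sees the same errno 107 / code -5 error as before. The one binding that does exist on the target is the registration from the setup sketch earlier:

    # Only this (hostnqn, subnqn) pair has a PSK registered on the target:
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0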
00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.UoQnZCm3Md 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UoQnZCm3Md 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71646 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71646 /var/tmp/bdevperf.sock 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71646 ']' 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:34.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:34.272 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.272 [2024-11-04 10:03:06.286055] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:13:34.272 [2024-11-04 10:03:06.286170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ] 00:13:34.552 [2024-11-04 10:03:06.428543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.552 [2024-11-04 10:03:06.494834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.552 [2024-11-04 10:03:06.555109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.118 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:35.118 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:35.118 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UoQnZCm3Md 00:13:35.377 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:35.636 [2024-11-04 10:03:07.799530] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.895 [2024-11-04 10:03:07.807978] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:35.895 [2024-11-04 10:03:07.808065] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:35.895 [2024-11-04 10:03:07.808126] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:35.895 [2024-11-04 10:03:07.808239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0ffb0 (107): Transport endpoint is not connected 00:13:35.895 [2024-11-04 10:03:07.809231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0ffb0 (9): Bad file descriptor 00:13:35.895 [2024-11-04 10:03:07.810228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:35.895 [2024-11-04 10:03:07.810272] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:35.895 [2024-11-04 10:03:07.810300] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:35.895 [2024-11-04 10:03:07.810311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:13:35.895 request: 00:13:35.895 { 00:13:35.895 "name": "TLSTEST", 00:13:35.895 "trtype": "tcp", 00:13:35.895 "traddr": "10.0.0.3", 00:13:35.895 "adrfam": "ipv4", 00:13:35.895 "trsvcid": "4420", 00:13:35.895 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:35.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:35.895 "prchk_reftag": false, 00:13:35.895 "prchk_guard": false, 00:13:35.895 "hdgst": false, 00:13:35.895 "ddgst": false, 00:13:35.895 "psk": "key0", 00:13:35.895 "allow_unrecognized_csi": false, 00:13:35.895 "method": "bdev_nvme_attach_controller", 00:13:35.895 "req_id": 1 00:13:35.895 } 00:13:35.895 Got JSON-RPC error response 00:13:35.895 response: 00:13:35.895 { 00:13:35.895 "code": -5, 00:13:35.895 "message": "Input/output error" 00:13:35.895 } 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71646 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71646 ']' 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71646 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71646 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71646' 00:13:35.895 killing process with pid 71646 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71646 00:13:35.895 Received shutdown signal, test time was about 10.000000 seconds 00:13:35.895 00:13:35.895 Latency(us) 00:13:35.895 [2024-11-04T10:03:08.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.895 [2024-11-04T10:03:08.065Z] =================================================================================================================== 00:13:35.895 [2024-11-04T10:03:08.065Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:35.895 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71646 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:35.895 10:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.895 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71675 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71675 /var/tmp/bdevperf.sock 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71675 ']' 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:35.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:35.896 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.155 [2024-11-04 10:03:08.096203] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
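The case now starting passes an empty string where the key path belongs, so it is expected to fail one step earlier than the previous ones: keyring_file_add_key rejects the non-absolute path (code -1, Operation not permitted), and the attach that follows then fails with code -126 (Required key not available) because key0 never made it into the keyring, as the errors just below show. In effect the assertion is that this first step already fails:

    # The empty-path case: loading the key must fail before any TLS attach is attempted.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''    # rejected: non-absolute path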
00:13:36.155 [2024-11-04 10:03:08.096325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71675 ] 00:13:36.155 [2024-11-04 10:03:08.238116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.155 [2024-11-04 10:03:08.290136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.414 [2024-11-04 10:03:08.346068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.414 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:36.414 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:36.414 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:36.672 [2024-11-04 10:03:08.689819] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:36.672 [2024-11-04 10:03:08.689894] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:36.672 request: 00:13:36.672 { 00:13:36.672 "name": "key0", 00:13:36.672 "path": "", 00:13:36.672 "method": "keyring_file_add_key", 00:13:36.672 "req_id": 1 00:13:36.672 } 00:13:36.672 Got JSON-RPC error response 00:13:36.672 response: 00:13:36.672 { 00:13:36.672 "code": -1, 00:13:36.672 "message": "Operation not permitted" 00:13:36.672 } 00:13:36.672 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:36.931 [2024-11-04 10:03:08.937965] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:36.931 [2024-11-04 10:03:08.938086] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:36.931 request: 00:13:36.931 { 00:13:36.931 "name": "TLSTEST", 00:13:36.931 "trtype": "tcp", 00:13:36.931 "traddr": "10.0.0.3", 00:13:36.931 "adrfam": "ipv4", 00:13:36.931 "trsvcid": "4420", 00:13:36.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.931 "prchk_reftag": false, 00:13:36.931 "prchk_guard": false, 00:13:36.931 "hdgst": false, 00:13:36.931 "ddgst": false, 00:13:36.931 "psk": "key0", 00:13:36.931 "allow_unrecognized_csi": false, 00:13:36.931 "method": "bdev_nvme_attach_controller", 00:13:36.931 "req_id": 1 00:13:36.931 } 00:13:36.931 Got JSON-RPC error response 00:13:36.931 response: 00:13:36.931 { 00:13:36.931 "code": -126, 00:13:36.931 "message": "Required key not available" 00:13:36.931 } 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71675 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71675 ']' 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71675 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:36.931 10:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71675 00:13:36.931 killing process with pid 71675 00:13:36.931 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.931 00:13:36.931 Latency(us) 00:13:36.931 [2024-11-04T10:03:09.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.931 [2024-11-04T10:03:09.101Z] =================================================================================================================== 00:13:36.931 [2024-11-04T10:03:09.101Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71675' 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71675 00:13:36.931 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71675 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71225 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71225 ']' 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71225 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71225 00:13:37.191 killing process with pid 71225 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71225' 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71225 00:13:37.191 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71225 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.DB2DddFYtE 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.DB2DddFYtE 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71712 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71712 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71712 ']' 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:37.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:37.451 10:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.451 [2024-11-04 10:03:09.515366] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:13:37.451 [2024-11-04 10:03:09.515440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.709 [2024-11-04 10:03:09.662068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.709 [2024-11-04 10:03:09.721167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.709 [2024-11-04 10:03:09.721234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:37.709 [2024-11-04 10:03:09.721260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.709 [2024-11-04 10:03:09.721268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.709 [2024-11-04 10:03:09.721274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.709 [2024-11-04 10:03:09.721704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.709 [2024-11-04 10:03:09.777829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.DB2DddFYtE 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DB2DddFYtE 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:38.661 [2024-11-04 10:03:10.773600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.661 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:38.929 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:39.188 [2024-11-04 10:03:11.309725] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:39.188 [2024-11-04 10:03:11.309973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:39.188 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:39.447 malloc0 00:13:39.447 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:39.706 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:13:39.965 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DB2DddFYtE 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
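The setup_nvmf_tgt helper traced above (target/tls.sh@52-59) brings up the TLS-capable target with a short rpc.py sequence. A condensed sketch follows; every command and argument appears verbatim in the trace, with the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shortened to rpc.py:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  #   -k requests the secure channel; the target config saved later in this run shows "secure_channel": true
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0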
00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DB2DddFYtE 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71767 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71767 /var/tmp/bdevperf.sock 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71767 ']' 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:40.224 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.224 [2024-11-04 10:03:12.338057] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:13:40.224 [2024-11-04 10:03:12.338197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71767 ] 00:13:40.483 [2024-11-04 10:03:12.489870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.483 [2024-11-04 10:03:12.557937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.483 [2024-11-04 10:03:12.616110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:41.420 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:41.420 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:41.420 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:13:41.420 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:41.679 [2024-11-04 10:03:13.742646] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:41.679 TLSTESTn1 00:13:41.679 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:41.938 Running I/O for 10 seconds... 00:13:43.811 4143.00 IOPS, 16.18 MiB/s [2024-11-04T10:03:17.358Z] 4224.00 IOPS, 16.50 MiB/s [2024-11-04T10:03:18.293Z] 4232.33 IOPS, 16.53 MiB/s [2024-11-04T10:03:19.229Z] 4250.50 IOPS, 16.60 MiB/s [2024-11-04T10:03:20.164Z] 4228.00 IOPS, 16.52 MiB/s [2024-11-04T10:03:21.098Z] 4208.33 IOPS, 16.44 MiB/s [2024-11-04T10:03:22.031Z] 4205.14 IOPS, 16.43 MiB/s [2024-11-04T10:03:22.966Z] 4208.38 IOPS, 16.44 MiB/s [2024-11-04T10:03:24.341Z] 4219.11 IOPS, 16.48 MiB/s [2024-11-04T10:03:24.341Z] 4223.10 IOPS, 16.50 MiB/s 00:13:52.171 Latency(us) 00:13:52.171 [2024-11-04T10:03:24.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.171 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:52.171 Verification LBA range: start 0x0 length 0x2000 00:13:52.171 TLSTESTn1 : 10.01 4229.49 16.52 0.00 0.00 30210.53 4825.83 28716.68 00:13:52.171 [2024-11-04T10:03:24.341Z] =================================================================================================================== 00:13:52.171 [2024-11-04T10:03:24.341Z] Total : 4229.49 16.52 0.00 0.00 30210.53 4825.83 28716.68 00:13:52.171 { 00:13:52.171 "results": [ 00:13:52.171 { 00:13:52.171 "job": "TLSTESTn1", 00:13:52.171 "core_mask": "0x4", 00:13:52.171 "workload": "verify", 00:13:52.171 "status": "finished", 00:13:52.171 "verify_range": { 00:13:52.171 "start": 0, 00:13:52.171 "length": 8192 00:13:52.171 }, 00:13:52.171 "queue_depth": 128, 00:13:52.171 "io_size": 4096, 00:13:52.171 "runtime": 10.01493, 00:13:52.171 "iops": 4229.485378330153, 00:13:52.171 "mibps": 16.52142725910216, 00:13:52.171 "io_failed": 0, 00:13:52.171 "io_timeout": 0, 00:13:52.171 "avg_latency_us": 30210.5325614996, 00:13:52.171 "min_latency_us": 4825.832727272727, 00:13:52.171 
"max_latency_us": 28716.683636363636 00:13:52.171 } 00:13:52.171 ], 00:13:52.171 "core_count": 1 00:13:52.171 } 00:13:52.171 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:52.171 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71767 00:13:52.171 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71767 ']' 00:13:52.171 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71767 00:13:52.172 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:52.172 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.172 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71767 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:52.172 killing process with pid 71767 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71767' 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71767 00:13:52.172 Received shutdown signal, test time was about 10.000000 seconds 00:13:52.172 00:13:52.172 Latency(us) 00:13:52.172 [2024-11-04T10:03:24.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.172 [2024-11-04T10:03:24.342Z] =================================================================================================================== 00:13:52.172 [2024-11-04T10:03:24.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71767 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.DB2DddFYtE 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DB2DddFYtE 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DB2DddFYtE 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DB2DddFYtE 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DB2DddFYtE 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71903 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71903 /var/tmp/bdevperf.sock 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71903 ']' 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:52.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.172 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.172 [2024-11-04 10:03:24.254156] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:13:52.172 [2024-11-04 10:03:24.254259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71903 ] 00:13:52.430 [2024-11-04 10:03:24.403451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.430 [2024-11-04 10:03:24.458364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.430 [2024-11-04 10:03:24.513206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:52.430 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:52.430 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:52.430 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:13:52.688 [2024-11-04 10:03:24.843843] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DB2DddFYtE': 0100666 00:13:52.688 [2024-11-04 10:03:24.843882] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:52.688 request: 00:13:52.688 { 00:13:52.688 "name": "key0", 00:13:52.688 "path": "/tmp/tmp.DB2DddFYtE", 00:13:52.688 "method": "keyring_file_add_key", 00:13:52.688 "req_id": 1 00:13:52.688 } 00:13:52.688 Got JSON-RPC error response 00:13:52.688 response: 00:13:52.688 { 00:13:52.688 "code": -1, 00:13:52.688 "message": "Operation not permitted" 00:13:52.688 } 00:13:52.946 10:03:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:52.946 [2024-11-04 10:03:25.092016] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:52.946 [2024-11-04 10:03:25.092080] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:52.946 request: 00:13:52.946 { 00:13:52.946 "name": "TLSTEST", 00:13:52.946 "trtype": "tcp", 00:13:52.946 "traddr": "10.0.0.3", 00:13:52.946 "adrfam": "ipv4", 00:13:52.946 "trsvcid": "4420", 00:13:52.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.946 "prchk_reftag": false, 00:13:52.946 "prchk_guard": false, 00:13:52.946 "hdgst": false, 00:13:52.946 "ddgst": false, 00:13:52.946 "psk": "key0", 00:13:52.946 "allow_unrecognized_csi": false, 00:13:52.946 "method": "bdev_nvme_attach_controller", 00:13:52.946 "req_id": 1 00:13:52.946 } 00:13:52.946 Got JSON-RPC error response 00:13:52.946 response: 00:13:52.946 { 00:13:52.946 "code": -126, 00:13:52.946 "message": "Required key not available" 00:13:52.946 } 00:13:52.946 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71903 00:13:52.946 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71903 ']' 00:13:52.946 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71903 00:13:52.946 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:53.204 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71903 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:53.205 killing process with pid 71903 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71903' 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71903 00:13:53.205 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.205 00:13:53.205 Latency(us) 00:13:53.205 [2024-11-04T10:03:25.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.205 [2024-11-04T10:03:25.375Z] =================================================================================================================== 00:13:53.205 [2024-11-04T10:03:25.375Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71903 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71712 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71712 ']' 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71712 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71712 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:53.205 killing process with pid 71712 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71712' 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71712 00:13:53.205 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71712 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71933 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71933 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71933 ']' 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:53.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:53.463 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.463 [2024-11-04 10:03:25.606888] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:13:53.463 [2024-11-04 10:03:25.606971] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.720 [2024-11-04 10:03:25.743483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.720 [2024-11-04 10:03:25.789111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.720 [2024-11-04 10:03:25.789185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.720 [2024-11-04 10:03:25.789212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.720 [2024-11-04 10:03:25.789220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.720 [2024-11-04 10:03:25.789227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
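The failures traced before this restart are the point of the chmod 0666 case (target/tls.sh@171-178): once the key file is readable by group/other, the keyring_file module refuses to load it, so the TLS attach has no PSK. A condensed sketch of the failing sequence, with error strings copied from this log and rpc.py shortened:

  chmod 0666 /tmp/tmp.DB2DddFYtE
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE
  #   keyring_file_check_path: Invalid permissions for key file '/tmp/tmp.DB2DddFYtE': 0100666
  #   JSON-RPC error -1: "Operation not permitted"
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  #   spdk_bdev_nvme_create: Could not load PSK: key0
  #   JSON-RPC error -126: "Required key not available"

The freshly restarted nvmf_tgt below hits the same keyring_file permission error when tls.sh@178 retries setup_nvmf_tgt with the 0666 key, and nvmf_subsystem_add_host then fails because "Key 'key0' does not exist".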
00:13:53.720 [2024-11-04 10:03:25.789623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.720 [2024-11-04 10:03:25.840804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:53.978 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.978 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.DB2DddFYtE 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.DB2DddFYtE 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.DB2DddFYtE 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DB2DddFYtE 00:13:53.979 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:54.237 [2024-11-04 10:03:26.238128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.237 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:54.494 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:54.752 [2024-11-04 10:03:26.778291] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:54.752 [2024-11-04 10:03:26.778528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:54.752 10:03:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:55.010 malloc0 00:13:55.010 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:55.269 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:13:55.544 
[2024-11-04 10:03:27.665396] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DB2DddFYtE': 0100666 00:13:55.544 [2024-11-04 10:03:27.665474] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:55.544 request: 00:13:55.544 { 00:13:55.544 "name": "key0", 00:13:55.544 "path": "/tmp/tmp.DB2DddFYtE", 00:13:55.544 "method": "keyring_file_add_key", 00:13:55.544 "req_id": 1 00:13:55.544 } 00:13:55.544 Got JSON-RPC error response 00:13:55.544 response: 00:13:55.544 { 00:13:55.544 "code": -1, 00:13:55.544 "message": "Operation not permitted" 00:13:55.544 } 00:13:55.544 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:55.814 [2024-11-04 10:03:27.929490] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:55.814 [2024-11-04 10:03:27.929585] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:55.814 request: 00:13:55.814 { 00:13:55.814 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.814 "host": "nqn.2016-06.io.spdk:host1", 00:13:55.814 "psk": "key0", 00:13:55.814 "method": "nvmf_subsystem_add_host", 00:13:55.814 "req_id": 1 00:13:55.814 } 00:13:55.814 Got JSON-RPC error response 00:13:55.814 response: 00:13:55.814 { 00:13:55.814 "code": -32603, 00:13:55.814 "message": "Internal error" 00:13:55.814 } 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71933 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71933 ']' 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71933 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71933 00:13:55.814 killing process with pid 71933 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71933' 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71933 00:13:55.814 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71933 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.DB2DddFYtE 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71996 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71996 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71996 ']' 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:56.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:56.073 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.332 [2024-11-04 10:03:28.259425] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:13:56.332 [2024-11-04 10:03:28.259544] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.332 [2024-11-04 10:03:28.410775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.332 [2024-11-04 10:03:28.469109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.332 [2024-11-04 10:03:28.469184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.332 [2024-11-04 10:03:28.469211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.332 [2024-11-04 10:03:28.469220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.332 [2024-11-04 10:03:28.469227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:56.332 [2024-11-04 10:03:28.469635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.590 [2024-11-04 10:03:28.525092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.DB2DddFYtE 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DB2DddFYtE 00:13:57.158 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:57.416 [2024-11-04 10:03:29.553988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.416 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:57.674 10:03:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:58.242 [2024-11-04 10:03:30.126114] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:58.242 [2024-11-04 10:03:30.126364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:58.242 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:58.242 malloc0 00:13:58.645 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:58.645 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:13:58.903 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:59.162 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72056 00:13:59.162 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:59.162 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:59.162 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72056 /var/tmp/bdevperf.sock 00:13:59.162 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72056 ']' 
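On the initiator side the pattern is the same in both bdevperf passes (first traced at target/tls.sh@27-42, repeated from @188 onward here): start bdevperf with its own RPC socket, load the key into its keyring, attach a TLS-enabled controller with that PSK, then drive I/O. Condensed from the trace, with full binary and script paths shortened:

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # started in the background; the script then waits on the /var/tmp/bdevperf.sock RPC socket
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

In the first pass this produced the TLSTESTn1 results above (roughly 4.2k IOPS over about 10 s of verify I/O); the pass starting here is instead used to capture the target and bdevperf configurations with save_config, as traced below.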
00:13:59.163 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.163 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:59.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.163 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.163 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:59.163 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.163 [2024-11-04 10:03:31.289550] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:13:59.163 [2024-11-04 10:03:31.289715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72056 ] 00:13:59.421 [2024-11-04 10:03:31.433499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.421 [2024-11-04 10:03:31.518030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.421 [2024-11-04 10:03:31.582001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:59.680 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:59.680 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:59.680 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:13:59.940 10:03:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:00.199 [2024-11-04 10:03:32.152714] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:00.199 TLSTESTn1 00:14:00.199 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:00.767 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:00.767 "subsystems": [ 00:14:00.767 { 00:14:00.767 "subsystem": "keyring", 00:14:00.767 "config": [ 00:14:00.767 { 00:14:00.767 "method": "keyring_file_add_key", 00:14:00.767 "params": { 00:14:00.767 "name": "key0", 00:14:00.767 "path": "/tmp/tmp.DB2DddFYtE" 00:14:00.767 } 00:14:00.767 } 00:14:00.767 ] 00:14:00.767 }, 00:14:00.767 { 00:14:00.767 "subsystem": "iobuf", 00:14:00.767 "config": [ 00:14:00.767 { 00:14:00.768 "method": "iobuf_set_options", 00:14:00.768 "params": { 00:14:00.768 "small_pool_count": 8192, 00:14:00.768 "large_pool_count": 1024, 00:14:00.768 "small_bufsize": 8192, 00:14:00.768 "large_bufsize": 135168, 00:14:00.768 "enable_numa": false 00:14:00.768 } 00:14:00.768 } 00:14:00.768 ] 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "subsystem": "sock", 00:14:00.768 "config": [ 00:14:00.768 { 00:14:00.768 "method": "sock_set_default_impl", 00:14:00.768 "params": { 
00:14:00.768 "impl_name": "uring" 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "sock_impl_set_options", 00:14:00.768 "params": { 00:14:00.768 "impl_name": "ssl", 00:14:00.768 "recv_buf_size": 4096, 00:14:00.768 "send_buf_size": 4096, 00:14:00.768 "enable_recv_pipe": true, 00:14:00.768 "enable_quickack": false, 00:14:00.768 "enable_placement_id": 0, 00:14:00.768 "enable_zerocopy_send_server": true, 00:14:00.768 "enable_zerocopy_send_client": false, 00:14:00.768 "zerocopy_threshold": 0, 00:14:00.768 "tls_version": 0, 00:14:00.768 "enable_ktls": false 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "sock_impl_set_options", 00:14:00.768 "params": { 00:14:00.768 "impl_name": "posix", 00:14:00.768 "recv_buf_size": 2097152, 00:14:00.768 "send_buf_size": 2097152, 00:14:00.768 "enable_recv_pipe": true, 00:14:00.768 "enable_quickack": false, 00:14:00.768 "enable_placement_id": 0, 00:14:00.768 "enable_zerocopy_send_server": true, 00:14:00.768 "enable_zerocopy_send_client": false, 00:14:00.768 "zerocopy_threshold": 0, 00:14:00.768 "tls_version": 0, 00:14:00.768 "enable_ktls": false 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "sock_impl_set_options", 00:14:00.768 "params": { 00:14:00.768 "impl_name": "uring", 00:14:00.768 "recv_buf_size": 2097152, 00:14:00.768 "send_buf_size": 2097152, 00:14:00.768 "enable_recv_pipe": true, 00:14:00.768 "enable_quickack": false, 00:14:00.768 "enable_placement_id": 0, 00:14:00.768 "enable_zerocopy_send_server": false, 00:14:00.768 "enable_zerocopy_send_client": false, 00:14:00.768 "zerocopy_threshold": 0, 00:14:00.768 "tls_version": 0, 00:14:00.768 "enable_ktls": false 00:14:00.768 } 00:14:00.768 } 00:14:00.768 ] 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "subsystem": "vmd", 00:14:00.768 "config": [] 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "subsystem": "accel", 00:14:00.768 "config": [ 00:14:00.768 { 00:14:00.768 "method": "accel_set_options", 00:14:00.768 "params": { 00:14:00.768 "small_cache_size": 128, 00:14:00.768 "large_cache_size": 16, 00:14:00.768 "task_count": 2048, 00:14:00.768 "sequence_count": 2048, 00:14:00.768 "buf_count": 2048 00:14:00.768 } 00:14:00.768 } 00:14:00.768 ] 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "subsystem": "bdev", 00:14:00.768 "config": [ 00:14:00.768 { 00:14:00.768 "method": "bdev_set_options", 00:14:00.768 "params": { 00:14:00.768 "bdev_io_pool_size": 65535, 00:14:00.768 "bdev_io_cache_size": 256, 00:14:00.768 "bdev_auto_examine": true, 00:14:00.768 "iobuf_small_cache_size": 128, 00:14:00.768 "iobuf_large_cache_size": 16 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "bdev_raid_set_options", 00:14:00.768 "params": { 00:14:00.768 "process_window_size_kb": 1024, 00:14:00.768 "process_max_bandwidth_mb_sec": 0 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "bdev_iscsi_set_options", 00:14:00.768 "params": { 00:14:00.768 "timeout_sec": 30 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "bdev_nvme_set_options", 00:14:00.768 "params": { 00:14:00.768 "action_on_timeout": "none", 00:14:00.768 "timeout_us": 0, 00:14:00.768 "timeout_admin_us": 0, 00:14:00.768 "keep_alive_timeout_ms": 10000, 00:14:00.768 "arbitration_burst": 0, 00:14:00.768 "low_priority_weight": 0, 00:14:00.768 "medium_priority_weight": 0, 00:14:00.768 "high_priority_weight": 0, 00:14:00.768 "nvme_adminq_poll_period_us": 10000, 00:14:00.768 "nvme_ioq_poll_period_us": 0, 00:14:00.768 "io_queue_requests": 0, 00:14:00.768 "delay_cmd_submit": 
true, 00:14:00.768 "transport_retry_count": 4, 00:14:00.768 "bdev_retry_count": 3, 00:14:00.768 "transport_ack_timeout": 0, 00:14:00.768 "ctrlr_loss_timeout_sec": 0, 00:14:00.768 "reconnect_delay_sec": 0, 00:14:00.768 "fast_io_fail_timeout_sec": 0, 00:14:00.768 "disable_auto_failback": false, 00:14:00.768 "generate_uuids": false, 00:14:00.768 "transport_tos": 0, 00:14:00.768 "nvme_error_stat": false, 00:14:00.768 "rdma_srq_size": 0, 00:14:00.768 "io_path_stat": false, 00:14:00.768 "allow_accel_sequence": false, 00:14:00.768 "rdma_max_cq_size": 0, 00:14:00.768 "rdma_cm_event_timeout_ms": 0, 00:14:00.768 "dhchap_digests": [ 00:14:00.768 "sha256", 00:14:00.768 "sha384", 00:14:00.768 "sha512" 00:14:00.768 ], 00:14:00.768 "dhchap_dhgroups": [ 00:14:00.768 "null", 00:14:00.768 "ffdhe2048", 00:14:00.768 "ffdhe3072", 00:14:00.768 "ffdhe4096", 00:14:00.768 "ffdhe6144", 00:14:00.768 "ffdhe8192" 00:14:00.768 ] 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "bdev_nvme_set_hotplug", 00:14:00.768 "params": { 00:14:00.768 "period_us": 100000, 00:14:00.768 "enable": false 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "bdev_malloc_create", 00:14:00.768 "params": { 00:14:00.768 "name": "malloc0", 00:14:00.768 "num_blocks": 8192, 00:14:00.768 "block_size": 4096, 00:14:00.768 "physical_block_size": 4096, 00:14:00.768 "uuid": "075adfcf-352b-4c0a-8da8-744f19c7bcc0", 00:14:00.768 "optimal_io_boundary": 0, 00:14:00.768 "md_size": 0, 00:14:00.768 "dif_type": 0, 00:14:00.768 "dif_is_head_of_md": false, 00:14:00.768 "dif_pi_format": 0 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "bdev_wait_for_examine" 00:14:00.768 } 00:14:00.768 ] 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "subsystem": "nbd", 00:14:00.768 "config": [] 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "subsystem": "scheduler", 00:14:00.768 "config": [ 00:14:00.768 { 00:14:00.768 "method": "framework_set_scheduler", 00:14:00.768 "params": { 00:14:00.768 "name": "static" 00:14:00.768 } 00:14:00.768 } 00:14:00.768 ] 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "subsystem": "nvmf", 00:14:00.768 "config": [ 00:14:00.768 { 00:14:00.768 "method": "nvmf_set_config", 00:14:00.768 "params": { 00:14:00.768 "discovery_filter": "match_any", 00:14:00.768 "admin_cmd_passthru": { 00:14:00.768 "identify_ctrlr": false 00:14:00.768 }, 00:14:00.768 "dhchap_digests": [ 00:14:00.768 "sha256", 00:14:00.768 "sha384", 00:14:00.768 "sha512" 00:14:00.768 ], 00:14:00.768 "dhchap_dhgroups": [ 00:14:00.768 "null", 00:14:00.768 "ffdhe2048", 00:14:00.768 "ffdhe3072", 00:14:00.768 "ffdhe4096", 00:14:00.768 "ffdhe6144", 00:14:00.768 "ffdhe8192" 00:14:00.768 ] 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "nvmf_set_max_subsystems", 00:14:00.768 "params": { 00:14:00.768 "max_subsystems": 1024 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "nvmf_set_crdt", 00:14:00.768 "params": { 00:14:00.768 "crdt1": 0, 00:14:00.768 "crdt2": 0, 00:14:00.768 "crdt3": 0 00:14:00.768 } 00:14:00.768 }, 00:14:00.768 { 00:14:00.768 "method": "nvmf_create_transport", 00:14:00.768 "params": { 00:14:00.768 "trtype": "TCP", 00:14:00.769 "max_queue_depth": 128, 00:14:00.769 "max_io_qpairs_per_ctrlr": 127, 00:14:00.769 "in_capsule_data_size": 4096, 00:14:00.769 "max_io_size": 131072, 00:14:00.769 "io_unit_size": 131072, 00:14:00.769 "max_aq_depth": 128, 00:14:00.769 "num_shared_buffers": 511, 00:14:00.769 "buf_cache_size": 4294967295, 00:14:00.769 "dif_insert_or_strip": false, 00:14:00.769 "zcopy": false, 
00:14:00.769 "c2h_success": false, 00:14:00.769 "sock_priority": 0, 00:14:00.769 "abort_timeout_sec": 1, 00:14:00.769 "ack_timeout": 0, 00:14:00.769 "data_wr_pool_size": 0 00:14:00.769 } 00:14:00.769 }, 00:14:00.769 { 00:14:00.769 "method": "nvmf_create_subsystem", 00:14:00.769 "params": { 00:14:00.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.769 "allow_any_host": false, 00:14:00.769 "serial_number": "SPDK00000000000001", 00:14:00.769 "model_number": "SPDK bdev Controller", 00:14:00.769 "max_namespaces": 10, 00:14:00.769 "min_cntlid": 1, 00:14:00.769 "max_cntlid": 65519, 00:14:00.769 "ana_reporting": false 00:14:00.769 } 00:14:00.769 }, 00:14:00.769 { 00:14:00.769 "method": "nvmf_subsystem_add_host", 00:14:00.769 "params": { 00:14:00.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.769 "host": "nqn.2016-06.io.spdk:host1", 00:14:00.769 "psk": "key0" 00:14:00.769 } 00:14:00.769 }, 00:14:00.769 { 00:14:00.769 "method": "nvmf_subsystem_add_ns", 00:14:00.769 "params": { 00:14:00.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.769 "namespace": { 00:14:00.769 "nsid": 1, 00:14:00.769 "bdev_name": "malloc0", 00:14:00.769 "nguid": "075ADFCF352B4C0A8DA8744F19C7BCC0", 00:14:00.769 "uuid": "075adfcf-352b-4c0a-8da8-744f19c7bcc0", 00:14:00.769 "no_auto_visible": false 00:14:00.769 } 00:14:00.769 } 00:14:00.769 }, 00:14:00.769 { 00:14:00.769 "method": "nvmf_subsystem_add_listener", 00:14:00.769 "params": { 00:14:00.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.769 "listen_address": { 00:14:00.769 "trtype": "TCP", 00:14:00.769 "adrfam": "IPv4", 00:14:00.769 "traddr": "10.0.0.3", 00:14:00.769 "trsvcid": "4420" 00:14:00.769 }, 00:14:00.769 "secure_channel": true 00:14:00.769 } 00:14:00.769 } 00:14:00.769 ] 00:14:00.769 } 00:14:00.769 ] 00:14:00.769 }' 00:14:00.769 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:01.029 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:01.029 "subsystems": [ 00:14:01.029 { 00:14:01.029 "subsystem": "keyring", 00:14:01.029 "config": [ 00:14:01.029 { 00:14:01.029 "method": "keyring_file_add_key", 00:14:01.029 "params": { 00:14:01.029 "name": "key0", 00:14:01.029 "path": "/tmp/tmp.DB2DddFYtE" 00:14:01.029 } 00:14:01.029 } 00:14:01.029 ] 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "subsystem": "iobuf", 00:14:01.029 "config": [ 00:14:01.029 { 00:14:01.029 "method": "iobuf_set_options", 00:14:01.029 "params": { 00:14:01.029 "small_pool_count": 8192, 00:14:01.029 "large_pool_count": 1024, 00:14:01.029 "small_bufsize": 8192, 00:14:01.029 "large_bufsize": 135168, 00:14:01.029 "enable_numa": false 00:14:01.029 } 00:14:01.029 } 00:14:01.029 ] 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "subsystem": "sock", 00:14:01.029 "config": [ 00:14:01.029 { 00:14:01.029 "method": "sock_set_default_impl", 00:14:01.029 "params": { 00:14:01.029 "impl_name": "uring" 00:14:01.029 } 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "method": "sock_impl_set_options", 00:14:01.029 "params": { 00:14:01.029 "impl_name": "ssl", 00:14:01.029 "recv_buf_size": 4096, 00:14:01.029 "send_buf_size": 4096, 00:14:01.029 "enable_recv_pipe": true, 00:14:01.029 "enable_quickack": false, 00:14:01.029 "enable_placement_id": 0, 00:14:01.029 "enable_zerocopy_send_server": true, 00:14:01.029 "enable_zerocopy_send_client": false, 00:14:01.029 "zerocopy_threshold": 0, 00:14:01.029 "tls_version": 0, 00:14:01.029 "enable_ktls": false 00:14:01.029 } 00:14:01.029 }, 
00:14:01.029 { 00:14:01.029 "method": "sock_impl_set_options", 00:14:01.029 "params": { 00:14:01.029 "impl_name": "posix", 00:14:01.029 "recv_buf_size": 2097152, 00:14:01.029 "send_buf_size": 2097152, 00:14:01.029 "enable_recv_pipe": true, 00:14:01.029 "enable_quickack": false, 00:14:01.029 "enable_placement_id": 0, 00:14:01.029 "enable_zerocopy_send_server": true, 00:14:01.029 "enable_zerocopy_send_client": false, 00:14:01.029 "zerocopy_threshold": 0, 00:14:01.029 "tls_version": 0, 00:14:01.029 "enable_ktls": false 00:14:01.029 } 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "method": "sock_impl_set_options", 00:14:01.029 "params": { 00:14:01.029 "impl_name": "uring", 00:14:01.029 "recv_buf_size": 2097152, 00:14:01.029 "send_buf_size": 2097152, 00:14:01.029 "enable_recv_pipe": true, 00:14:01.029 "enable_quickack": false, 00:14:01.029 "enable_placement_id": 0, 00:14:01.029 "enable_zerocopy_send_server": false, 00:14:01.029 "enable_zerocopy_send_client": false, 00:14:01.029 "zerocopy_threshold": 0, 00:14:01.029 "tls_version": 0, 00:14:01.029 "enable_ktls": false 00:14:01.029 } 00:14:01.029 } 00:14:01.029 ] 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "subsystem": "vmd", 00:14:01.029 "config": [] 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "subsystem": "accel", 00:14:01.029 "config": [ 00:14:01.029 { 00:14:01.029 "method": "accel_set_options", 00:14:01.029 "params": { 00:14:01.029 "small_cache_size": 128, 00:14:01.029 "large_cache_size": 16, 00:14:01.029 "task_count": 2048, 00:14:01.029 "sequence_count": 2048, 00:14:01.029 "buf_count": 2048 00:14:01.029 } 00:14:01.029 } 00:14:01.029 ] 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "subsystem": "bdev", 00:14:01.029 "config": [ 00:14:01.029 { 00:14:01.029 "method": "bdev_set_options", 00:14:01.029 "params": { 00:14:01.029 "bdev_io_pool_size": 65535, 00:14:01.029 "bdev_io_cache_size": 256, 00:14:01.029 "bdev_auto_examine": true, 00:14:01.029 "iobuf_small_cache_size": 128, 00:14:01.029 "iobuf_large_cache_size": 16 00:14:01.029 } 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "method": "bdev_raid_set_options", 00:14:01.029 "params": { 00:14:01.029 "process_window_size_kb": 1024, 00:14:01.029 "process_max_bandwidth_mb_sec": 0 00:14:01.029 } 00:14:01.029 }, 00:14:01.029 { 00:14:01.029 "method": "bdev_iscsi_set_options", 00:14:01.030 "params": { 00:14:01.030 "timeout_sec": 30 00:14:01.030 } 00:14:01.030 }, 00:14:01.030 { 00:14:01.030 "method": "bdev_nvme_set_options", 00:14:01.030 "params": { 00:14:01.030 "action_on_timeout": "none", 00:14:01.030 "timeout_us": 0, 00:14:01.030 "timeout_admin_us": 0, 00:14:01.030 "keep_alive_timeout_ms": 10000, 00:14:01.030 "arbitration_burst": 0, 00:14:01.030 "low_priority_weight": 0, 00:14:01.030 "medium_priority_weight": 0, 00:14:01.030 "high_priority_weight": 0, 00:14:01.030 "nvme_adminq_poll_period_us": 10000, 00:14:01.030 "nvme_ioq_poll_period_us": 0, 00:14:01.030 "io_queue_requests": 512, 00:14:01.030 "delay_cmd_submit": true, 00:14:01.030 "transport_retry_count": 4, 00:14:01.030 "bdev_retry_count": 3, 00:14:01.030 "transport_ack_timeout": 0, 00:14:01.030 "ctrlr_loss_timeout_sec": 0, 00:14:01.030 "reconnect_delay_sec": 0, 00:14:01.030 "fast_io_fail_timeout_sec": 0, 00:14:01.030 "disable_auto_failback": false, 00:14:01.030 "generate_uuids": false, 00:14:01.030 "transport_tos": 0, 00:14:01.030 "nvme_error_stat": false, 00:14:01.030 "rdma_srq_size": 0, 00:14:01.030 "io_path_stat": false, 00:14:01.030 "allow_accel_sequence": false, 00:14:01.030 "rdma_max_cq_size": 0, 00:14:01.030 "rdma_cm_event_timeout_ms": 0, 00:14:01.030 
"dhchap_digests": [ 00:14:01.030 "sha256", 00:14:01.030 "sha384", 00:14:01.030 "sha512" 00:14:01.030 ], 00:14:01.030 "dhchap_dhgroups": [ 00:14:01.030 "null", 00:14:01.030 "ffdhe2048", 00:14:01.030 "ffdhe3072", 00:14:01.030 "ffdhe4096", 00:14:01.030 "ffdhe6144", 00:14:01.030 "ffdhe8192" 00:14:01.030 ] 00:14:01.030 } 00:14:01.030 }, 00:14:01.030 { 00:14:01.030 "method": "bdev_nvme_attach_controller", 00:14:01.030 "params": { 00:14:01.030 "name": "TLSTEST", 00:14:01.030 "trtype": "TCP", 00:14:01.030 "adrfam": "IPv4", 00:14:01.030 "traddr": "10.0.0.3", 00:14:01.030 "trsvcid": "4420", 00:14:01.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.030 "prchk_reftag": false, 00:14:01.030 "prchk_guard": false, 00:14:01.030 "ctrlr_loss_timeout_sec": 0, 00:14:01.030 "reconnect_delay_sec": 0, 00:14:01.030 "fast_io_fail_timeout_sec": 0, 00:14:01.030 "psk": "key0", 00:14:01.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:01.030 "hdgst": false, 00:14:01.030 "ddgst": false, 00:14:01.030 "multipath": "multipath" 00:14:01.030 } 00:14:01.030 }, 00:14:01.030 { 00:14:01.030 "method": "bdev_nvme_set_hotplug", 00:14:01.030 "params": { 00:14:01.030 "period_us": 100000, 00:14:01.030 "enable": false 00:14:01.030 } 00:14:01.030 }, 00:14:01.030 { 00:14:01.030 "method": "bdev_wait_for_examine" 00:14:01.030 } 00:14:01.030 ] 00:14:01.030 }, 00:14:01.030 { 00:14:01.030 "subsystem": "nbd", 00:14:01.030 "config": [] 00:14:01.030 } 00:14:01.030 ] 00:14:01.030 }' 00:14:01.030 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72056 00:14:01.030 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72056 ']' 00:14:01.030 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72056 00:14:01.030 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:01.030 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:01.030 10:03:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72056 00:14:01.030 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:01.030 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:01.030 killing process with pid 72056 00:14:01.030 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72056' 00:14:01.030 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72056 00:14:01.030 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72056 00:14:01.030 Received shutdown signal, test time was about 10.000000 seconds 00:14:01.030 00:14:01.030 Latency(us) 00:14:01.030 [2024-11-04T10:03:33.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.030 [2024-11-04T10:03:33.200Z] =================================================================================================================== 00:14:01.030 [2024-11-04T10:03:33.200Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71996 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71996 ']' 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 71996 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71996 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:01.290 killing process with pid 71996 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71996' 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71996 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71996 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:01.290 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:01.290 "subsystems": [ 00:14:01.290 { 00:14:01.290 "subsystem": "keyring", 00:14:01.290 "config": [ 00:14:01.290 { 00:14:01.290 "method": "keyring_file_add_key", 00:14:01.290 "params": { 00:14:01.290 "name": "key0", 00:14:01.290 "path": "/tmp/tmp.DB2DddFYtE" 00:14:01.290 } 00:14:01.290 } 00:14:01.290 ] 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "subsystem": "iobuf", 00:14:01.290 "config": [ 00:14:01.290 { 00:14:01.290 "method": "iobuf_set_options", 00:14:01.290 "params": { 00:14:01.290 "small_pool_count": 8192, 00:14:01.290 "large_pool_count": 1024, 00:14:01.290 "small_bufsize": 8192, 00:14:01.290 "large_bufsize": 135168, 00:14:01.290 "enable_numa": false 00:14:01.290 } 00:14:01.290 } 00:14:01.290 ] 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "subsystem": "sock", 00:14:01.290 "config": [ 00:14:01.290 { 00:14:01.290 "method": "sock_set_default_impl", 00:14:01.290 "params": { 00:14:01.290 "impl_name": "uring" 00:14:01.290 } 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "method": "sock_impl_set_options", 00:14:01.290 "params": { 00:14:01.290 "impl_name": "ssl", 00:14:01.290 "recv_buf_size": 4096, 00:14:01.290 "send_buf_size": 4096, 00:14:01.290 "enable_recv_pipe": true, 00:14:01.290 "enable_quickack": false, 00:14:01.290 "enable_placement_id": 0, 00:14:01.290 "enable_zerocopy_send_server": true, 00:14:01.290 "enable_zerocopy_send_client": false, 00:14:01.290 "zerocopy_threshold": 0, 00:14:01.290 "tls_version": 0, 00:14:01.290 "enable_ktls": false 00:14:01.290 } 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "method": "sock_impl_set_options", 00:14:01.290 "params": { 00:14:01.290 "impl_name": "posix", 00:14:01.290 "recv_buf_size": 2097152, 00:14:01.290 "send_buf_size": 2097152, 00:14:01.290 "enable_recv_pipe": true, 00:14:01.290 "enable_quickack": false, 00:14:01.290 "enable_placement_id": 0, 00:14:01.290 "enable_zerocopy_send_server": true, 00:14:01.290 "enable_zerocopy_send_client": false, 00:14:01.290 "zerocopy_threshold": 0, 00:14:01.290 "tls_version": 0, 00:14:01.290 "enable_ktls": false 00:14:01.290 } 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "method": "sock_impl_set_options", 00:14:01.290 "params": { 00:14:01.290 "impl_name": "uring", 00:14:01.290 "recv_buf_size": 2097152, 00:14:01.290 
"send_buf_size": 2097152, 00:14:01.290 "enable_recv_pipe": true, 00:14:01.290 "enable_quickack": false, 00:14:01.290 "enable_placement_id": 0, 00:14:01.290 "enable_zerocopy_send_server": false, 00:14:01.290 "enable_zerocopy_send_client": false, 00:14:01.290 "zerocopy_threshold": 0, 00:14:01.290 "tls_version": 0, 00:14:01.290 "enable_ktls": false 00:14:01.290 } 00:14:01.290 } 00:14:01.290 ] 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "subsystem": "vmd", 00:14:01.290 "config": [] 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "subsystem": "accel", 00:14:01.290 "config": [ 00:14:01.290 { 00:14:01.290 "method": "accel_set_options", 00:14:01.290 "params": { 00:14:01.290 "small_cache_size": 128, 00:14:01.290 "large_cache_size": 16, 00:14:01.290 "task_count": 2048, 00:14:01.290 "sequence_count": 2048, 00:14:01.290 "buf_count": 2048 00:14:01.290 } 00:14:01.290 } 00:14:01.290 ] 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "subsystem": "bdev", 00:14:01.290 "config": [ 00:14:01.290 { 00:14:01.290 "method": "bdev_set_options", 00:14:01.290 "params": { 00:14:01.290 "bdev_io_pool_size": 65535, 00:14:01.290 "bdev_io_cache_size": 256, 00:14:01.290 "bdev_auto_examine": true, 00:14:01.290 "iobuf_small_cache_size": 128, 00:14:01.290 "iobuf_large_cache_size": 16 00:14:01.290 } 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "method": "bdev_raid_set_options", 00:14:01.290 "params": { 00:14:01.290 "process_window_size_kb": 1024, 00:14:01.290 "process_max_bandwidth_mb_sec": 0 00:14:01.290 } 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "method": "bdev_iscsi_set_options", 00:14:01.290 "params": { 00:14:01.290 "timeout_sec": 30 00:14:01.290 } 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "method": "bdev_nvme_set_options", 00:14:01.290 "params": { 00:14:01.290 "action_on_timeout": "none", 00:14:01.290 "timeout_us": 0, 00:14:01.290 "timeout_admin_us": 0, 00:14:01.290 "keep_alive_timeout_ms": 10000, 00:14:01.290 "arbitration_burst": 0, 00:14:01.290 "low_priority_weight": 0, 00:14:01.290 "medium_priority_weight": 0, 00:14:01.290 "high_priority_weight": 0, 00:14:01.290 "nvme_adminq_poll_period_us": 10000, 00:14:01.290 "nvme_ioq_poll_period_us": 0, 00:14:01.290 "io_queue_requests": 0, 00:14:01.290 "delay_cmd_submit": true, 00:14:01.290 "transport_retry_count": 4, 00:14:01.290 "bdev_retry_count": 3, 00:14:01.290 "transport_ack_timeout": 0, 00:14:01.290 "ctrlr_loss_timeout_sec": 0, 00:14:01.290 "reconnect_delay_sec": 0, 00:14:01.290 "fast_io_fail_timeout_sec": 0, 00:14:01.290 "disable_auto_failback": false, 00:14:01.290 "generate_uuids": false, 00:14:01.290 "transport_tos": 0, 00:14:01.290 "nvme_error_stat": false, 00:14:01.290 "rdma_srq_size": 0, 00:14:01.290 "io_path_stat": false, 00:14:01.290 "allow_accel_sequence": false, 00:14:01.290 "rdma_max_cq_size": 0, 00:14:01.290 "rdma_cm_event_timeout_ms": 0, 00:14:01.290 "dhchap_digests": [ 00:14:01.290 "sha256", 00:14:01.290 "sha384", 00:14:01.290 "sha512" 00:14:01.290 ], 00:14:01.290 "dhchap_dhgroups": [ 00:14:01.290 "null", 00:14:01.290 "ffdhe2048", 00:14:01.290 "ffdhe3072", 00:14:01.290 "ffdhe4096", 00:14:01.290 "ffdhe6144", 00:14:01.290 "ffdhe8192" 00:14:01.290 ] 00:14:01.290 } 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "method": "bdev_nvme_set_hotplug", 00:14:01.290 "params": { 00:14:01.290 "period_us": 100000, 00:14:01.290 "enable": false 00:14:01.290 } 00:14:01.290 }, 00:14:01.290 { 00:14:01.290 "method": "bdev_malloc_create", 00:14:01.290 "params": { 00:14:01.290 "name": "malloc0", 00:14:01.290 "num_blocks": 8192, 00:14:01.291 "block_size": 4096, 00:14:01.291 
"physical_block_size": 4096, 00:14:01.291 "uuid": "075adfcf-352b-4c0a-8da8-744f19c7bcc0", 00:14:01.291 "optimal_io_boundary": 0, 00:14:01.291 "md_size": 0, 00:14:01.291 "dif_type": 0, 00:14:01.291 "dif_is_head_of_md": false, 00:14:01.291 "dif_pi_format": 0 00:14:01.291 } 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "method": "bdev_wait_for_examine" 00:14:01.291 } 00:14:01.291 ] 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "subsystem": "nbd", 00:14:01.291 "config": [] 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "subsystem": "scheduler", 00:14:01.291 "config": [ 00:14:01.291 { 00:14:01.291 "method": "framework_set_scheduler", 00:14:01.291 "params": { 00:14:01.291 "name": "static" 00:14:01.291 } 00:14:01.291 } 00:14:01.291 ] 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "subsystem": "nvmf", 00:14:01.291 "config": [ 00:14:01.291 { 00:14:01.291 "method": "nvmf_set_config", 00:14:01.291 "params": { 00:14:01.291 "discovery_filter": "match_any", 00:14:01.291 "admin_cmd_passthru": { 00:14:01.291 "identify_ctrlr": false 00:14:01.291 }, 00:14:01.291 "dhchap_digests": [ 00:14:01.291 "sha256", 00:14:01.291 "sha384", 00:14:01.291 "sha512" 00:14:01.291 ], 00:14:01.291 "dhchap_dhgroups": [ 00:14:01.291 "null", 00:14:01.291 "ffdhe2048", 00:14:01.291 "ffdhe3072", 00:14:01.291 "ffdhe4096", 00:14:01.291 "ffdhe6144", 00:14:01.291 "ffdhe8192" 00:14:01.291 ] 00:14:01.291 } 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "method": "nvmf_set_max_subsystems", 00:14:01.291 "params": { 00:14:01.291 "max_subsystems": 1024 00:14:01.291 } 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "method": "nvmf_set_crdt", 00:14:01.291 "params": { 00:14:01.291 "crdt1": 0, 00:14:01.291 "crdt2": 0, 00:14:01.291 "crdt3": 0 00:14:01.291 } 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "method": "nvmf_create_transport", 00:14:01.291 "params": { 00:14:01.291 "trtype": "TCP", 00:14:01.291 "max_queue_depth": 128, 00:14:01.291 "max_io_qpairs_per_ctrlr": 127, 00:14:01.291 "in_capsule_data_size": 4096, 00:14:01.291 "max_io_size": 131072, 00:14:01.291 "io_unit_size": 131072, 00:14:01.291 "max_aq_depth": 128, 00:14:01.291 "num_shared_buffers": 511, 00:14:01.291 "buf_cache_size": 4294967295, 00:14:01.291 "dif_insert_or_strip": false, 00:14:01.291 "zcopy": false, 00:14:01.291 "c2h_success": false, 00:14:01.291 "sock_priority": 0, 00:14:01.291 "abort_timeout_sec": 1, 00:14:01.291 "ack_timeout": 0, 00:14:01.291 "data_wr_pool_size": 0 00:14:01.291 } 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "method": "nvmf_create_subsystem", 00:14:01.291 "params": { 00:14:01.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.291 "allow_any_host": false, 00:14:01.291 "serial_number": "SPDK00000000000001", 00:14:01.291 "model_number": "SPDK bdev Controller", 00:14:01.291 "max_namespaces": 10, 00:14:01.291 "min_cntlid": 1, 00:14:01.291 "max_cntlid": 65519, 00:14:01.291 "ana_reporting": false 00:14:01.291 } 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "method": "nvmf_subsystem_add_host", 00:14:01.291 "params": { 00:14:01.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.291 "host": "nqn.2016-06.io.spdk:host1", 00:14:01.291 "psk": "key0" 00:14:01.291 } 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "method": "nvmf_subsystem_add_ns", 00:14:01.291 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:01.291 "params": { 00:14:01.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.291 "namespace": { 00:14:01.291 "nsid": 1, 00:14:01.291 "bdev_name": "malloc0", 00:14:01.291 "nguid": "075ADFCF352B4C0A8DA8744F19C7BCC0", 00:14:01.291 "uuid": 
"075adfcf-352b-4c0a-8da8-744f19c7bcc0", 00:14:01.291 "no_auto_visible": false 00:14:01.291 } 00:14:01.291 } 00:14:01.291 }, 00:14:01.291 { 00:14:01.291 "method": "nvmf_subsystem_add_listener", 00:14:01.291 "params": { 00:14:01.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.291 "listen_address": { 00:14:01.291 "trtype": "TCP", 00:14:01.291 "adrfam": "IPv4", 00:14:01.291 "traddr": "10.0.0.3", 00:14:01.291 "trsvcid": "4420" 00:14:01.291 }, 00:14:01.291 "secure_channel": true 00:14:01.291 } 00:14:01.291 } 00:14:01.291 ] 00:14:01.291 } 00:14:01.291 ] 00:14:01.291 }' 00:14:01.291 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72095 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72095 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72095 ']' 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.551 10:03:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.551 [2024-11-04 10:03:33.516868] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:01.551 [2024-11-04 10:03:33.516967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.551 [2024-11-04 10:03:33.663567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.551 [2024-11-04 10:03:33.717135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.551 [2024-11-04 10:03:33.717230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.551 [2024-11-04 10:03:33.717241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.551 [2024-11-04 10:03:33.717250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.551 [2024-11-04 10:03:33.717258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:01.551 [2024-11-04 10:03:33.717734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.810 [2024-11-04 10:03:33.887352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.810 [2024-11-04 10:03:33.967891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.068 [2024-11-04 10:03:33.999822] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:02.069 [2024-11-04 10:03:34.000049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:02.328 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.328 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:02.328 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.328 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.328 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72126 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72126 /var/tmp/bdevperf.sock 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72126 ']' 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:02.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
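Note: the bdevperf client launched above takes its entire configuration (keyring entry, sock options, and the TLS-protected bdev_nvme_attach_controller call) as the JSON blob echoed next on /dev/fd/63. Later passes in this log drive the same attach over the bdevperf RPC socket instead; condensed, that client-side flow is as follows. This is a sketch assembled from commands that appear verbatim further below; paths are given relative to the SPDK repo root rather than the absolute /home/vagrant/spdk_repo/spdk paths the test uses.
  # Start bdevperf idle (-z) so it waits to be configured over its RPC socket.
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # Register the interchange-format PSK, then attach the TLS-protected controller.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Kick off the verify workload.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests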
00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:02.587 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:02.587 "subsystems": [ 00:14:02.587 { 00:14:02.587 "subsystem": "keyring", 00:14:02.587 "config": [ 00:14:02.587 { 00:14:02.587 "method": "keyring_file_add_key", 00:14:02.587 "params": { 00:14:02.587 "name": "key0", 00:14:02.587 "path": "/tmp/tmp.DB2DddFYtE" 00:14:02.587 } 00:14:02.587 } 00:14:02.587 ] 00:14:02.587 }, 00:14:02.587 { 00:14:02.587 "subsystem": "iobuf", 00:14:02.587 "config": [ 00:14:02.587 { 00:14:02.587 "method": "iobuf_set_options", 00:14:02.587 "params": { 00:14:02.587 "small_pool_count": 8192, 00:14:02.587 "large_pool_count": 1024, 00:14:02.587 "small_bufsize": 8192, 00:14:02.587 "large_bufsize": 135168, 00:14:02.587 "enable_numa": false 00:14:02.587 } 00:14:02.587 } 00:14:02.587 ] 00:14:02.587 }, 00:14:02.587 { 00:14:02.587 "subsystem": "sock", 00:14:02.587 "config": [ 00:14:02.587 { 00:14:02.587 "method": "sock_set_default_impl", 00:14:02.587 "params": { 00:14:02.587 "impl_name": "uring" 00:14:02.587 } 00:14:02.587 }, 00:14:02.587 { 00:14:02.587 "method": "sock_impl_set_options", 00:14:02.587 "params": { 00:14:02.587 "impl_name": "ssl", 00:14:02.587 "recv_buf_size": 4096, 00:14:02.587 "send_buf_size": 4096, 00:14:02.587 "enable_recv_pipe": true, 00:14:02.587 "enable_quickack": false, 00:14:02.587 "enable_placement_id": 0, 00:14:02.587 "enable_zerocopy_send_server": true, 00:14:02.587 "enable_zerocopy_send_client": false, 00:14:02.587 "zerocopy_threshold": 0, 00:14:02.587 "tls_version": 0, 00:14:02.587 "enable_ktls": false 00:14:02.587 } 00:14:02.587 }, 00:14:02.587 { 00:14:02.587 "method": "sock_impl_set_options", 00:14:02.587 "params": { 00:14:02.587 "impl_name": "posix", 00:14:02.587 "recv_buf_size": 2097152, 00:14:02.587 "send_buf_size": 2097152, 00:14:02.587 "enable_recv_pipe": true, 00:14:02.587 "enable_quickack": false, 00:14:02.587 "enable_placement_id": 0, 00:14:02.587 "enable_zerocopy_send_server": true, 00:14:02.587 "enable_zerocopy_send_client": false, 00:14:02.587 "zerocopy_threshold": 0, 00:14:02.587 "tls_version": 0, 00:14:02.587 "enable_ktls": false 00:14:02.587 } 00:14:02.587 }, 00:14:02.587 { 00:14:02.587 "method": "sock_impl_set_options", 00:14:02.587 "params": { 00:14:02.587 "impl_name": "uring", 00:14:02.587 "recv_buf_size": 2097152, 00:14:02.587 "send_buf_size": 2097152, 00:14:02.587 "enable_recv_pipe": true, 00:14:02.587 "enable_quickack": false, 00:14:02.587 "enable_placement_id": 0, 00:14:02.587 "enable_zerocopy_send_server": false, 00:14:02.587 "enable_zerocopy_send_client": false, 00:14:02.587 "zerocopy_threshold": 0, 00:14:02.587 "tls_version": 0, 00:14:02.587 "enable_ktls": false 00:14:02.587 } 00:14:02.587 } 00:14:02.587 ] 00:14:02.587 }, 00:14:02.587 { 00:14:02.587 "subsystem": "vmd", 00:14:02.587 "config": [] 00:14:02.587 }, 00:14:02.587 { 00:14:02.587 "subsystem": "accel", 00:14:02.587 "config": [ 00:14:02.587 { 00:14:02.587 "method": "accel_set_options", 00:14:02.587 "params": { 00:14:02.587 "small_cache_size": 128, 00:14:02.587 "large_cache_size": 16, 00:14:02.587 "task_count": 2048, 00:14:02.587 "sequence_count": 
2048, 00:14:02.587 "buf_count": 2048 00:14:02.587 } 00:14:02.587 } 00:14:02.587 ] 00:14:02.587 }, 00:14:02.587 { 00:14:02.587 "subsystem": "bdev", 00:14:02.587 "config": [ 00:14:02.587 { 00:14:02.587 "method": "bdev_set_options", 00:14:02.587 "params": { 00:14:02.587 "bdev_io_pool_size": 65535, 00:14:02.587 "bdev_io_cache_size": 256, 00:14:02.587 "bdev_auto_examine": true, 00:14:02.588 "iobuf_small_cache_size": 128, 00:14:02.588 "iobuf_large_cache_size": 16 00:14:02.588 } 00:14:02.588 }, 00:14:02.588 { 00:14:02.588 "method": "bdev_raid_set_options", 00:14:02.588 "params": { 00:14:02.588 "process_window_size_kb": 1024, 00:14:02.588 "process_max_bandwidth_mb_sec": 0 00:14:02.588 } 00:14:02.588 }, 00:14:02.588 { 00:14:02.588 "method": "bdev_iscsi_set_options", 00:14:02.588 "params": { 00:14:02.588 "timeout_sec": 30 00:14:02.588 } 00:14:02.588 }, 00:14:02.588 { 00:14:02.588 "method": "bdev_nvme_set_options", 00:14:02.588 "params": { 00:14:02.588 "action_on_timeout": "none", 00:14:02.588 "timeout_us": 0, 00:14:02.588 "timeout_admin_us": 0, 00:14:02.588 "keep_alive_timeout_ms": 10000, 00:14:02.588 "arbitration_burst": 0, 00:14:02.588 "low_priority_weight": 0, 00:14:02.588 "medium_priority_weight": 0, 00:14:02.588 "high_priority_weight": 0, 00:14:02.588 "nvme_adminq_poll_period_us": 10000, 00:14:02.588 "nvme_ioq_poll_period_us": 0, 00:14:02.588 "io_queue_requests": 512, 00:14:02.588 "delay_cmd_submit": true, 00:14:02.588 "transport_retry_count": 4, 00:14:02.588 "bdev_retry_count": 3, 00:14:02.588 "transport_ack_timeout": 0, 00:14:02.588 "ctrlr_loss_timeout_sec": 0, 00:14:02.588 "reconnect_delay_sec": 0, 00:14:02.588 "fast_io_fail_timeout_sec": 0, 00:14:02.588 "disable_auto_failback": false, 00:14:02.588 "generate_uuids": false, 00:14:02.588 "transport_tos": 0, 00:14:02.588 "nvme_error_stat": false, 00:14:02.588 "rdma_srq_size": 0, 00:14:02.588 "io_path_stat": false, 00:14:02.588 "allow_accel_sequence": false, 00:14:02.588 "rdma_max_cq_size": 0, 00:14:02.588 "rdma_cm_event_timeout_ms": 0, 00:14:02.588 "dhchap_digests": [ 00:14:02.588 "sha256", 00:14:02.588 "sha384", 00:14:02.588 "sha512" 00:14:02.588 ], 00:14:02.588 "dhchap_dhgroups": [ 00:14:02.588 "null", 00:14:02.588 "ffdhe2048", 00:14:02.588 "ffdhe3072", 00:14:02.588 "ffdhe4096", 00:14:02.588 "ffdhe6144", 00:14:02.588 "ffdhe8192" 00:14:02.588 ] 00:14:02.588 } 00:14:02.588 }, 00:14:02.588 { 00:14:02.588 "method": "bdev_nvme_attach_controller", 00:14:02.588 "params": { 00:14:02.588 "name": "TLSTEST", 00:14:02.588 "trtype": "TCP", 00:14:02.588 "adrfam": "IPv4", 00:14:02.588 "traddr": "10.0.0.3", 00:14:02.588 "trsvcid": "4420", 00:14:02.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.588 "prchk_reftag": false, 00:14:02.588 "prchk_guard": false, 00:14:02.588 "ctrlr_loss_timeout_sec": 0, 00:14:02.588 "reconnect_delay_sec": 0, 00:14:02.588 "fast_io_fail_timeout_sec": 0, 00:14:02.588 "psk": "key0", 00:14:02.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.588 "hdgst": false, 00:14:02.588 "ddgst": false, 00:14:02.588 "multipath": "multipath" 00:14:02.588 } 00:14:02.588 }, 00:14:02.588 { 00:14:02.588 "method": "bdev_nvme_set_hotplug", 00:14:02.588 "params": { 00:14:02.588 "period_us": 100000, 00:14:02.588 "enable": false 00:14:02.588 } 00:14:02.588 }, 00:14:02.588 { 00:14:02.588 "method": "bdev_wait_for_examine" 00:14:02.588 } 00:14:02.588 ] 00:14:02.588 }, 00:14:02.588 { 00:14:02.588 "subsystem": "nbd", 00:14:02.588 "config": [] 00:14:02.588 } 00:14:02.588 ] 00:14:02.588 }' 00:14:02.588 [2024-11-04 10:03:34.583726] Starting SPDK v25.01-pre git 
sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:02.588 [2024-11-04 10:03:34.583858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72126 ] 00:14:02.588 [2024-11-04 10:03:34.736648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.847 [2024-11-04 10:03:34.800843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.847 [2024-11-04 10:03:34.941202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:02.847 [2024-11-04 10:03:34.990492] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.783 10:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.783 10:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:03.783 10:03:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:03.783 Running I/O for 10 seconds... 00:14:05.654 4008.00 IOPS, 15.66 MiB/s [2024-11-04T10:03:38.760Z] 4025.50 IOPS, 15.72 MiB/s [2024-11-04T10:03:40.167Z] 4056.67 IOPS, 15.85 MiB/s [2024-11-04T10:03:41.109Z] 4059.50 IOPS, 15.86 MiB/s [2024-11-04T10:03:42.045Z] 4065.20 IOPS, 15.88 MiB/s [2024-11-04T10:03:42.981Z] 4095.83 IOPS, 16.00 MiB/s [2024-11-04T10:03:43.915Z] 4147.00 IOPS, 16.20 MiB/s [2024-11-04T10:03:44.851Z] 4156.62 IOPS, 16.24 MiB/s [2024-11-04T10:03:45.786Z] 4136.00 IOPS, 16.16 MiB/s [2024-11-04T10:03:45.786Z] 4099.50 IOPS, 16.01 MiB/s 00:14:13.616 Latency(us) 00:14:13.616 [2024-11-04T10:03:45.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.616 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:13.616 Verification LBA range: start 0x0 length 0x2000 00:14:13.616 TLSTESTn1 : 10.02 4105.00 16.04 0.00 0.00 31122.63 5689.72 27167.65 00:14:13.616 [2024-11-04T10:03:45.786Z] =================================================================================================================== 00:14:13.616 [2024-11-04T10:03:45.786Z] Total : 4105.00 16.04 0.00 0.00 31122.63 5689.72 27167.65 00:14:13.616 { 00:14:13.616 "results": [ 00:14:13.616 { 00:14:13.616 "job": "TLSTESTn1", 00:14:13.616 "core_mask": "0x4", 00:14:13.616 "workload": "verify", 00:14:13.616 "status": "finished", 00:14:13.616 "verify_range": { 00:14:13.616 "start": 0, 00:14:13.616 "length": 8192 00:14:13.616 }, 00:14:13.616 "queue_depth": 128, 00:14:13.616 "io_size": 4096, 00:14:13.616 "runtime": 10.01754, 00:14:13.616 "iops": 4104.999830297658, 00:14:13.616 "mibps": 16.035155587100228, 00:14:13.616 "io_failed": 0, 00:14:13.616 "io_timeout": 0, 00:14:13.616 "avg_latency_us": 31122.62830088738, 00:14:13.616 "min_latency_us": 5689.716363636364, 00:14:13.616 "max_latency_us": 27167.65090909091 00:14:13.616 } 00:14:13.616 ], 00:14:13.616 "core_count": 1 00:14:13.616 } 00:14:13.616 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.616 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72126 00:14:13.616 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72126 ']' 00:14:13.616 
10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72126 00:14:13.616 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:13.874 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:13.874 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72126 00:14:13.874 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:13.874 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:13.874 killing process with pid 72126 00:14:13.874 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72126' 00:14:13.874 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72126 00:14:13.874 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.874 00:14:13.874 Latency(us) 00:14:13.874 [2024-11-04T10:03:46.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.874 [2024-11-04T10:03:46.044Z] =================================================================================================================== 00:14:13.874 [2024-11-04T10:03:46.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.874 10:03:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72126 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72095 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72095 ']' 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72095 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72095 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:14.133 killing process with pid 72095 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72095' 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72095 00:14:14.133 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72095 00:14:14.391 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:14.391 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.391 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.391 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.391 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72270 00:14:14.391 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
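Note: the nvmf_tgt started here (pid 72270) is launched without an inline JSON config; setup_nvmf_tgt then builds the TLS-enabled target state through individual rpc.py calls, all of which show up verbatim in the trace that follows. For reference, that sequence condenses to the sketch below, in the same order as the log; paths are relative to the SPDK repo root, and the PSK path is the temporary interchange key file used throughout this run.
  # TCP transport plus a subsystem backed by a malloc bdev namespace.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # Listen with TLS enabled (-k).
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Register the PSK and authorize the host NQN against it.
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0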
00:14:14.392 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72270 00:14:14.392 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72270 ']' 00:14:14.392 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.392 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.392 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.392 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.392 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.392 [2024-11-04 10:03:46.405413] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:14.392 [2024-11-04 10:03:46.405525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.392 [2024-11-04 10:03:46.559172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.650 [2024-11-04 10:03:46.629507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.650 [2024-11-04 10:03:46.629620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.650 [2024-11-04 10:03:46.629644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.650 [2024-11-04 10:03:46.629654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.650 [2024-11-04 10:03:46.629664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:14.650 [2024-11-04 10:03:46.630143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.650 [2024-11-04 10:03:46.690855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.DB2DddFYtE 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DB2DddFYtE 00:14:15.586 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:15.844 [2024-11-04 10:03:47.803404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.844 10:03:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:16.103 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:16.361 [2024-11-04 10:03:48.347531] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:16.361 [2024-11-04 10:03:48.347889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:16.361 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:16.620 malloc0 00:14:16.620 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:16.879 10:03:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:14:17.138 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72326 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72326 /var/tmp/bdevperf.sock 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72326 ']' 00:14:17.397 
10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:17.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:17.397 10:03:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.397 [2024-11-04 10:03:49.524003] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:17.397 [2024-11-04 10:03:49.524657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72326 ] 00:14:17.662 [2024-11-04 10:03:49.673543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.662 [2024-11-04 10:03:49.742655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.662 [2024-11-04 10:03:49.803757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.598 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:18.598 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:18.598 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:14:18.857 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:19.114 [2024-11-04 10:03:51.063657] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.114 nvme0n1 00:14:19.114 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:19.114 Running I/O for 1 seconds... 
00:14:20.489 3821.00 IOPS, 14.93 MiB/s 00:14:20.489 Latency(us) 00:14:20.489 [2024-11-04T10:03:52.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.489 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:20.489 Verification LBA range: start 0x0 length 0x2000 00:14:20.489 nvme0n1 : 1.02 3887.30 15.18 0.00 0.00 32617.93 6017.40 28240.06 00:14:20.489 [2024-11-04T10:03:52.659Z] =================================================================================================================== 00:14:20.489 [2024-11-04T10:03:52.659Z] Total : 3887.30 15.18 0.00 0.00 32617.93 6017.40 28240.06 00:14:20.489 { 00:14:20.489 "results": [ 00:14:20.489 { 00:14:20.489 "job": "nvme0n1", 00:14:20.489 "core_mask": "0x2", 00:14:20.489 "workload": "verify", 00:14:20.489 "status": "finished", 00:14:20.489 "verify_range": { 00:14:20.489 "start": 0, 00:14:20.489 "length": 8192 00:14:20.489 }, 00:14:20.489 "queue_depth": 128, 00:14:20.489 "io_size": 4096, 00:14:20.489 "runtime": 1.016129, 00:14:20.489 "iops": 3887.301710707991, 00:14:20.489 "mibps": 15.18477230745309, 00:14:20.489 "io_failed": 0, 00:14:20.489 "io_timeout": 0, 00:14:20.489 "avg_latency_us": 32617.9308556962, 00:14:20.489 "min_latency_us": 6017.396363636363, 00:14:20.489 "max_latency_us": 28240.05818181818 00:14:20.489 } 00:14:20.489 ], 00:14:20.489 "core_count": 1 00:14:20.489 } 00:14:20.489 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72326 00:14:20.489 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72326 ']' 00:14:20.489 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72326 00:14:20.489 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:20.489 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72326 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:20.490 killing process with pid 72326 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72326' 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72326 00:14:20.490 Received shutdown signal, test time was about 1.000000 seconds 00:14:20.490 00:14:20.490 Latency(us) 00:14:20.490 [2024-11-04T10:03:52.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.490 [2024-11-04T10:03:52.660Z] =================================================================================================================== 00:14:20.490 [2024-11-04T10:03:52.660Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72326 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72270 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72270 ']' 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72270 00:14:20.490 10:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72270 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:20.490 killing process with pid 72270 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72270' 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72270 00:14:20.490 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72270 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72377 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72377 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72377 ']' 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:20.749 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.749 [2024-11-04 10:03:52.836761] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:20.749 [2024-11-04 10:03:52.836892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.008 [2024-11-04 10:03:52.987298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.008 [2024-11-04 10:03:53.046298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.008 [2024-11-04 10:03:53.046366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:21.008 [2024-11-04 10:03:53.046378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.008 [2024-11-04 10:03:53.046387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.008 [2024-11-04 10:03:53.046395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.008 [2024-11-04 10:03:53.046807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.008 [2024-11-04 10:03:53.101002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.008 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:21.008 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:21.008 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:21.008 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.008 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.267 [2024-11-04 10:03:53.204733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.267 malloc0 00:14:21.267 [2024-11-04 10:03:53.236257] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:21.267 [2024-11-04 10:03:53.236507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72401 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72401 /var/tmp/bdevperf.sock 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72401 ']' 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
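Note: at the end of this pass (further below) both sides are snapshotted with save_config, producing the tgtcfg and bperfcfg dumps. The test script captures the JSON into shell variables; the equivalent manual capture into files, with illustrative output names, would be:
  # Target-side configuration (default /var/tmp/spdk.sock RPC socket).
  ./scripts/rpc.py save_config > tgt_config.json
  # bdevperf-side configuration, addressed via its own RPC socket.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > bperf_config.json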
00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:21.267 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.267 [2024-11-04 10:03:53.326379] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:21.267 [2024-11-04 10:03:53.326507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72401 ] 00:14:21.525 [2024-11-04 10:03:53.475363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.526 [2024-11-04 10:03:53.551577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.526 [2024-11-04 10:03:53.613036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.473 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:22.473 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:22.473 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE 00:14:22.473 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:22.746 [2024-11-04 10:03:54.842701] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.746 nvme0n1 00:14:23.004 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:23.004 Running I/O for 1 seconds... 
00:14:24.197 3968.00 IOPS, 15.50 MiB/s 00:14:24.197 Latency(us) 00:14:24.197 [2024-11-04T10:03:56.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.198 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.198 Verification LBA range: start 0x0 length 0x2000 00:14:24.198 nvme0n1 : 1.02 4008.87 15.66 0.00 0.00 31581.16 10009.13 22163.08 00:14:24.198 [2024-11-04T10:03:56.368Z] =================================================================================================================== 00:14:24.198 [2024-11-04T10:03:56.368Z] Total : 4008.87 15.66 0.00 0.00 31581.16 10009.13 22163.08 00:14:24.198 { 00:14:24.198 "results": [ 00:14:24.198 { 00:14:24.198 "job": "nvme0n1", 00:14:24.198 "core_mask": "0x2", 00:14:24.198 "workload": "verify", 00:14:24.198 "status": "finished", 00:14:24.198 "verify_range": { 00:14:24.198 "start": 0, 00:14:24.198 "length": 8192 00:14:24.198 }, 00:14:24.198 "queue_depth": 128, 00:14:24.198 "io_size": 4096, 00:14:24.198 "runtime": 1.021735, 00:14:24.198 "iops": 4008.867269888963, 00:14:24.198 "mibps": 15.659637773003762, 00:14:24.198 "io_failed": 0, 00:14:24.198 "io_timeout": 0, 00:14:24.198 "avg_latency_us": 31581.163636363635, 00:14:24.198 "min_latency_us": 10009.134545454546, 00:14:24.198 "max_latency_us": 22163.083636363637 00:14:24.198 } 00:14:24.198 ], 00:14:24.198 "core_count": 1 00:14:24.198 } 00:14:24.198 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:24.198 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.198 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.198 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.198 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:24.198 "subsystems": [ 00:14:24.198 { 00:14:24.198 "subsystem": "keyring", 00:14:24.198 "config": [ 00:14:24.198 { 00:14:24.198 "method": "keyring_file_add_key", 00:14:24.198 "params": { 00:14:24.198 "name": "key0", 00:14:24.198 "path": "/tmp/tmp.DB2DddFYtE" 00:14:24.198 } 00:14:24.198 } 00:14:24.198 ] 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "subsystem": "iobuf", 00:14:24.198 "config": [ 00:14:24.198 { 00:14:24.198 "method": "iobuf_set_options", 00:14:24.198 "params": { 00:14:24.198 "small_pool_count": 8192, 00:14:24.198 "large_pool_count": 1024, 00:14:24.198 "small_bufsize": 8192, 00:14:24.198 "large_bufsize": 135168, 00:14:24.198 "enable_numa": false 00:14:24.198 } 00:14:24.198 } 00:14:24.198 ] 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "subsystem": "sock", 00:14:24.198 "config": [ 00:14:24.198 { 00:14:24.198 "method": "sock_set_default_impl", 00:14:24.198 "params": { 00:14:24.198 "impl_name": "uring" 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "sock_impl_set_options", 00:14:24.198 "params": { 00:14:24.198 "impl_name": "ssl", 00:14:24.198 "recv_buf_size": 4096, 00:14:24.198 "send_buf_size": 4096, 00:14:24.198 "enable_recv_pipe": true, 00:14:24.198 "enable_quickack": false, 00:14:24.198 "enable_placement_id": 0, 00:14:24.198 "enable_zerocopy_send_server": true, 00:14:24.198 "enable_zerocopy_send_client": false, 00:14:24.198 "zerocopy_threshold": 0, 00:14:24.198 "tls_version": 0, 00:14:24.198 "enable_ktls": false 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "sock_impl_set_options", 00:14:24.198 "params": { 00:14:24.198 "impl_name": 
"posix", 00:14:24.198 "recv_buf_size": 2097152, 00:14:24.198 "send_buf_size": 2097152, 00:14:24.198 "enable_recv_pipe": true, 00:14:24.198 "enable_quickack": false, 00:14:24.198 "enable_placement_id": 0, 00:14:24.198 "enable_zerocopy_send_server": true, 00:14:24.198 "enable_zerocopy_send_client": false, 00:14:24.198 "zerocopy_threshold": 0, 00:14:24.198 "tls_version": 0, 00:14:24.198 "enable_ktls": false 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "sock_impl_set_options", 00:14:24.198 "params": { 00:14:24.198 "impl_name": "uring", 00:14:24.198 "recv_buf_size": 2097152, 00:14:24.198 "send_buf_size": 2097152, 00:14:24.198 "enable_recv_pipe": true, 00:14:24.198 "enable_quickack": false, 00:14:24.198 "enable_placement_id": 0, 00:14:24.198 "enable_zerocopy_send_server": false, 00:14:24.198 "enable_zerocopy_send_client": false, 00:14:24.198 "zerocopy_threshold": 0, 00:14:24.198 "tls_version": 0, 00:14:24.198 "enable_ktls": false 00:14:24.198 } 00:14:24.198 } 00:14:24.198 ] 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "subsystem": "vmd", 00:14:24.198 "config": [] 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "subsystem": "accel", 00:14:24.198 "config": [ 00:14:24.198 { 00:14:24.198 "method": "accel_set_options", 00:14:24.198 "params": { 00:14:24.198 "small_cache_size": 128, 00:14:24.198 "large_cache_size": 16, 00:14:24.198 "task_count": 2048, 00:14:24.198 "sequence_count": 2048, 00:14:24.198 "buf_count": 2048 00:14:24.198 } 00:14:24.198 } 00:14:24.198 ] 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "subsystem": "bdev", 00:14:24.198 "config": [ 00:14:24.198 { 00:14:24.198 "method": "bdev_set_options", 00:14:24.198 "params": { 00:14:24.198 "bdev_io_pool_size": 65535, 00:14:24.198 "bdev_io_cache_size": 256, 00:14:24.198 "bdev_auto_examine": true, 00:14:24.198 "iobuf_small_cache_size": 128, 00:14:24.198 "iobuf_large_cache_size": 16 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "bdev_raid_set_options", 00:14:24.198 "params": { 00:14:24.198 "process_window_size_kb": 1024, 00:14:24.198 "process_max_bandwidth_mb_sec": 0 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "bdev_iscsi_set_options", 00:14:24.198 "params": { 00:14:24.198 "timeout_sec": 30 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "bdev_nvme_set_options", 00:14:24.198 "params": { 00:14:24.198 "action_on_timeout": "none", 00:14:24.198 "timeout_us": 0, 00:14:24.198 "timeout_admin_us": 0, 00:14:24.198 "keep_alive_timeout_ms": 10000, 00:14:24.198 "arbitration_burst": 0, 00:14:24.198 "low_priority_weight": 0, 00:14:24.198 "medium_priority_weight": 0, 00:14:24.198 "high_priority_weight": 0, 00:14:24.198 "nvme_adminq_poll_period_us": 10000, 00:14:24.198 "nvme_ioq_poll_period_us": 0, 00:14:24.198 "io_queue_requests": 0, 00:14:24.198 "delay_cmd_submit": true, 00:14:24.198 "transport_retry_count": 4, 00:14:24.198 "bdev_retry_count": 3, 00:14:24.198 "transport_ack_timeout": 0, 00:14:24.198 "ctrlr_loss_timeout_sec": 0, 00:14:24.198 "reconnect_delay_sec": 0, 00:14:24.198 "fast_io_fail_timeout_sec": 0, 00:14:24.198 "disable_auto_failback": false, 00:14:24.198 "generate_uuids": false, 00:14:24.198 "transport_tos": 0, 00:14:24.198 "nvme_error_stat": false, 00:14:24.198 "rdma_srq_size": 0, 00:14:24.198 "io_path_stat": false, 00:14:24.198 "allow_accel_sequence": false, 00:14:24.198 "rdma_max_cq_size": 0, 00:14:24.198 "rdma_cm_event_timeout_ms": 0, 00:14:24.198 "dhchap_digests": [ 00:14:24.198 "sha256", 00:14:24.198 "sha384", 00:14:24.198 "sha512" 00:14:24.198 ], 00:14:24.198 
"dhchap_dhgroups": [ 00:14:24.198 "null", 00:14:24.198 "ffdhe2048", 00:14:24.198 "ffdhe3072", 00:14:24.198 "ffdhe4096", 00:14:24.198 "ffdhe6144", 00:14:24.198 "ffdhe8192" 00:14:24.198 ] 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "bdev_nvme_set_hotplug", 00:14:24.198 "params": { 00:14:24.198 "period_us": 100000, 00:14:24.198 "enable": false 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "bdev_malloc_create", 00:14:24.198 "params": { 00:14:24.198 "name": "malloc0", 00:14:24.198 "num_blocks": 8192, 00:14:24.198 "block_size": 4096, 00:14:24.198 "physical_block_size": 4096, 00:14:24.198 "uuid": "88b5fe89-d94f-4df3-a274-d4b741377e40", 00:14:24.198 "optimal_io_boundary": 0, 00:14:24.198 "md_size": 0, 00:14:24.198 "dif_type": 0, 00:14:24.198 "dif_is_head_of_md": false, 00:14:24.198 "dif_pi_format": 0 00:14:24.198 } 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "method": "bdev_wait_for_examine" 00:14:24.198 } 00:14:24.198 ] 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "subsystem": "nbd", 00:14:24.198 "config": [] 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "subsystem": "scheduler", 00:14:24.198 "config": [ 00:14:24.198 { 00:14:24.198 "method": "framework_set_scheduler", 00:14:24.198 "params": { 00:14:24.198 "name": "static" 00:14:24.198 } 00:14:24.198 } 00:14:24.198 ] 00:14:24.198 }, 00:14:24.198 { 00:14:24.198 "subsystem": "nvmf", 00:14:24.198 "config": [ 00:14:24.198 { 00:14:24.198 "method": "nvmf_set_config", 00:14:24.198 "params": { 00:14:24.198 "discovery_filter": "match_any", 00:14:24.198 "admin_cmd_passthru": { 00:14:24.198 "identify_ctrlr": false 00:14:24.198 }, 00:14:24.198 "dhchap_digests": [ 00:14:24.198 "sha256", 00:14:24.198 "sha384", 00:14:24.198 "sha512" 00:14:24.198 ], 00:14:24.198 "dhchap_dhgroups": [ 00:14:24.198 "null", 00:14:24.199 "ffdhe2048", 00:14:24.199 "ffdhe3072", 00:14:24.199 "ffdhe4096", 00:14:24.199 "ffdhe6144", 00:14:24.199 "ffdhe8192" 00:14:24.199 ] 00:14:24.199 } 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "method": "nvmf_set_max_subsystems", 00:14:24.199 "params": { 00:14:24.199 "max_subsystems": 1024 00:14:24.199 } 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "method": "nvmf_set_crdt", 00:14:24.199 "params": { 00:14:24.199 "crdt1": 0, 00:14:24.199 "crdt2": 0, 00:14:24.199 "crdt3": 0 00:14:24.199 } 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "method": "nvmf_create_transport", 00:14:24.199 "params": { 00:14:24.199 "trtype": "TCP", 00:14:24.199 "max_queue_depth": 128, 00:14:24.199 "max_io_qpairs_per_ctrlr": 127, 00:14:24.199 "in_capsule_data_size": 4096, 00:14:24.199 "max_io_size": 131072, 00:14:24.199 "io_unit_size": 131072, 00:14:24.199 "max_aq_depth": 128, 00:14:24.199 "num_shared_buffers": 511, 00:14:24.199 "buf_cache_size": 4294967295, 00:14:24.199 "dif_insert_or_strip": false, 00:14:24.199 "zcopy": false, 00:14:24.199 "c2h_success": false, 00:14:24.199 "sock_priority": 0, 00:14:24.199 "abort_timeout_sec": 1, 00:14:24.199 "ack_timeout": 0, 00:14:24.199 "data_wr_pool_size": 0 00:14:24.199 } 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "method": "nvmf_create_subsystem", 00:14:24.199 "params": { 00:14:24.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.199 "allow_any_host": false, 00:14:24.199 "serial_number": "00000000000000000000", 00:14:24.199 "model_number": "SPDK bdev Controller", 00:14:24.199 "max_namespaces": 32, 00:14:24.199 "min_cntlid": 1, 00:14:24.199 "max_cntlid": 65519, 00:14:24.199 "ana_reporting": false 00:14:24.199 } 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "method": "nvmf_subsystem_add_host", 
00:14:24.199 "params": { 00:14:24.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.199 "host": "nqn.2016-06.io.spdk:host1", 00:14:24.199 "psk": "key0" 00:14:24.199 } 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "method": "nvmf_subsystem_add_ns", 00:14:24.199 "params": { 00:14:24.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.199 "namespace": { 00:14:24.199 "nsid": 1, 00:14:24.199 "bdev_name": "malloc0", 00:14:24.199 "nguid": "88B5FE89D94F4DF3A274D4B741377E40", 00:14:24.199 "uuid": "88b5fe89-d94f-4df3-a274-d4b741377e40", 00:14:24.199 "no_auto_visible": false 00:14:24.199 } 00:14:24.199 } 00:14:24.199 }, 00:14:24.199 { 00:14:24.199 "method": "nvmf_subsystem_add_listener", 00:14:24.199 "params": { 00:14:24.199 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.199 "listen_address": { 00:14:24.199 "trtype": "TCP", 00:14:24.199 "adrfam": "IPv4", 00:14:24.199 "traddr": "10.0.0.3", 00:14:24.199 "trsvcid": "4420" 00:14:24.199 }, 00:14:24.199 "secure_channel": false, 00:14:24.199 "sock_impl": "ssl" 00:14:24.199 } 00:14:24.199 } 00:14:24.199 ] 00:14:24.199 } 00:14:24.199 ] 00:14:24.199 }' 00:14:24.199 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:24.767 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:24.767 "subsystems": [ 00:14:24.767 { 00:14:24.767 "subsystem": "keyring", 00:14:24.767 "config": [ 00:14:24.767 { 00:14:24.767 "method": "keyring_file_add_key", 00:14:24.767 "params": { 00:14:24.767 "name": "key0", 00:14:24.767 "path": "/tmp/tmp.DB2DddFYtE" 00:14:24.767 } 00:14:24.767 } 00:14:24.767 ] 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "subsystem": "iobuf", 00:14:24.767 "config": [ 00:14:24.767 { 00:14:24.767 "method": "iobuf_set_options", 00:14:24.767 "params": { 00:14:24.767 "small_pool_count": 8192, 00:14:24.767 "large_pool_count": 1024, 00:14:24.767 "small_bufsize": 8192, 00:14:24.767 "large_bufsize": 135168, 00:14:24.767 "enable_numa": false 00:14:24.767 } 00:14:24.767 } 00:14:24.767 ] 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "subsystem": "sock", 00:14:24.767 "config": [ 00:14:24.767 { 00:14:24.767 "method": "sock_set_default_impl", 00:14:24.767 "params": { 00:14:24.767 "impl_name": "uring" 00:14:24.767 } 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "method": "sock_impl_set_options", 00:14:24.767 "params": { 00:14:24.767 "impl_name": "ssl", 00:14:24.767 "recv_buf_size": 4096, 00:14:24.767 "send_buf_size": 4096, 00:14:24.767 "enable_recv_pipe": true, 00:14:24.767 "enable_quickack": false, 00:14:24.767 "enable_placement_id": 0, 00:14:24.767 "enable_zerocopy_send_server": true, 00:14:24.767 "enable_zerocopy_send_client": false, 00:14:24.767 "zerocopy_threshold": 0, 00:14:24.767 "tls_version": 0, 00:14:24.767 "enable_ktls": false 00:14:24.767 } 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "method": "sock_impl_set_options", 00:14:24.767 "params": { 00:14:24.767 "impl_name": "posix", 00:14:24.767 "recv_buf_size": 2097152, 00:14:24.767 "send_buf_size": 2097152, 00:14:24.767 "enable_recv_pipe": true, 00:14:24.767 "enable_quickack": false, 00:14:24.767 "enable_placement_id": 0, 00:14:24.767 "enable_zerocopy_send_server": true, 00:14:24.767 "enable_zerocopy_send_client": false, 00:14:24.767 "zerocopy_threshold": 0, 00:14:24.767 "tls_version": 0, 00:14:24.767 "enable_ktls": false 00:14:24.767 } 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "method": "sock_impl_set_options", 00:14:24.767 "params": { 00:14:24.767 "impl_name": "uring", 00:14:24.767 
"recv_buf_size": 2097152, 00:14:24.767 "send_buf_size": 2097152, 00:14:24.767 "enable_recv_pipe": true, 00:14:24.767 "enable_quickack": false, 00:14:24.767 "enable_placement_id": 0, 00:14:24.767 "enable_zerocopy_send_server": false, 00:14:24.767 "enable_zerocopy_send_client": false, 00:14:24.767 "zerocopy_threshold": 0, 00:14:24.767 "tls_version": 0, 00:14:24.767 "enable_ktls": false 00:14:24.767 } 00:14:24.767 } 00:14:24.767 ] 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "subsystem": "vmd", 00:14:24.767 "config": [] 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "subsystem": "accel", 00:14:24.767 "config": [ 00:14:24.767 { 00:14:24.767 "method": "accel_set_options", 00:14:24.767 "params": { 00:14:24.767 "small_cache_size": 128, 00:14:24.767 "large_cache_size": 16, 00:14:24.767 "task_count": 2048, 00:14:24.767 "sequence_count": 2048, 00:14:24.767 "buf_count": 2048 00:14:24.767 } 00:14:24.767 } 00:14:24.767 ] 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "subsystem": "bdev", 00:14:24.767 "config": [ 00:14:24.767 { 00:14:24.767 "method": "bdev_set_options", 00:14:24.767 "params": { 00:14:24.767 "bdev_io_pool_size": 65535, 00:14:24.767 "bdev_io_cache_size": 256, 00:14:24.767 "bdev_auto_examine": true, 00:14:24.767 "iobuf_small_cache_size": 128, 00:14:24.767 "iobuf_large_cache_size": 16 00:14:24.767 } 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "method": "bdev_raid_set_options", 00:14:24.767 "params": { 00:14:24.767 "process_window_size_kb": 1024, 00:14:24.767 "process_max_bandwidth_mb_sec": 0 00:14:24.767 } 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "method": "bdev_iscsi_set_options", 00:14:24.767 "params": { 00:14:24.767 "timeout_sec": 30 00:14:24.767 } 00:14:24.767 }, 00:14:24.767 { 00:14:24.767 "method": "bdev_nvme_set_options", 00:14:24.767 "params": { 00:14:24.767 "action_on_timeout": "none", 00:14:24.767 "timeout_us": 0, 00:14:24.767 "timeout_admin_us": 0, 00:14:24.767 "keep_alive_timeout_ms": 10000, 00:14:24.767 "arbitration_burst": 0, 00:14:24.767 "low_priority_weight": 0, 00:14:24.767 "medium_priority_weight": 0, 00:14:24.767 "high_priority_weight": 0, 00:14:24.767 "nvme_adminq_poll_period_us": 10000, 00:14:24.767 "nvme_ioq_poll_period_us": 0, 00:14:24.767 "io_queue_requests": 512, 00:14:24.767 "delay_cmd_submit": true, 00:14:24.767 "transport_retry_count": 4, 00:14:24.767 "bdev_retry_count": 3, 00:14:24.767 "transport_ack_timeout": 0, 00:14:24.767 "ctrlr_loss_timeout_sec": 0, 00:14:24.767 "reconnect_delay_sec": 0, 00:14:24.767 "fast_io_fail_timeout_sec": 0, 00:14:24.767 "disable_auto_failback": false, 00:14:24.767 "generate_uuids": false, 00:14:24.768 "transport_tos": 0, 00:14:24.768 "nvme_error_stat": false, 00:14:24.768 "rdma_srq_size": 0, 00:14:24.768 "io_path_stat": false, 00:14:24.768 "allow_accel_sequence": false, 00:14:24.768 "rdma_max_cq_size": 0, 00:14:24.768 "rdma_cm_event_timeout_ms": 0, 00:14:24.768 "dhchap_digests": [ 00:14:24.768 "sha256", 00:14:24.768 "sha384", 00:14:24.768 "sha512" 00:14:24.768 ], 00:14:24.768 "dhchap_dhgroups": [ 00:14:24.768 "null", 00:14:24.768 "ffdhe2048", 00:14:24.768 "ffdhe3072", 00:14:24.768 "ffdhe4096", 00:14:24.768 "ffdhe6144", 00:14:24.768 "ffdhe8192" 00:14:24.768 ] 00:14:24.768 } 00:14:24.768 }, 00:14:24.768 { 00:14:24.768 "method": "bdev_nvme_attach_controller", 00:14:24.768 "params": { 00:14:24.768 "name": "nvme0", 00:14:24.768 "trtype": "TCP", 00:14:24.768 "adrfam": "IPv4", 00:14:24.768 "traddr": "10.0.0.3", 00:14:24.768 "trsvcid": "4420", 00:14:24.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.768 "prchk_reftag": false, 00:14:24.768 
"prchk_guard": false, 00:14:24.768 "ctrlr_loss_timeout_sec": 0, 00:14:24.768 "reconnect_delay_sec": 0, 00:14:24.768 "fast_io_fail_timeout_sec": 0, 00:14:24.768 "psk": "key0", 00:14:24.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:24.768 "hdgst": false, 00:14:24.768 "ddgst": false, 00:14:24.768 "multipath": "multipath" 00:14:24.768 } 00:14:24.768 }, 00:14:24.768 { 00:14:24.768 "method": "bdev_nvme_set_hotplug", 00:14:24.768 "params": { 00:14:24.768 "period_us": 100000, 00:14:24.768 "enable": false 00:14:24.768 } 00:14:24.768 }, 00:14:24.768 { 00:14:24.768 "method": "bdev_enable_histogram", 00:14:24.768 "params": { 00:14:24.768 "name": "nvme0n1", 00:14:24.768 "enable": true 00:14:24.768 } 00:14:24.768 }, 00:14:24.768 { 00:14:24.768 "method": "bdev_wait_for_examine" 00:14:24.768 } 00:14:24.768 ] 00:14:24.768 }, 00:14:24.768 { 00:14:24.768 "subsystem": "nbd", 00:14:24.768 "config": [] 00:14:24.768 } 00:14:24.768 ] 00:14:24.768 }' 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72401 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72401 ']' 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72401 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72401 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:24.768 killing process with pid 72401 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72401' 00:14:24.768 Received shutdown signal, test time was about 1.000000 seconds 00:14:24.768 00:14:24.768 Latency(us) 00:14:24.768 [2024-11-04T10:03:56.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.768 [2024-11-04T10:03:56.938Z] =================================================================================================================== 00:14:24.768 [2024-11-04T10:03:56.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72401 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72401 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72377 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72377 ']' 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72377 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72377 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:24.768 10:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:24.768 killing process with pid 72377 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72377' 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72377 00:14:24.768 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72377 00:14:25.027 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:25.027 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:25.027 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.027 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:25.027 "subsystems": [ 00:14:25.027 { 00:14:25.027 "subsystem": "keyring", 00:14:25.027 "config": [ 00:14:25.027 { 00:14:25.027 "method": "keyring_file_add_key", 00:14:25.027 "params": { 00:14:25.027 "name": "key0", 00:14:25.027 "path": "/tmp/tmp.DB2DddFYtE" 00:14:25.027 } 00:14:25.027 } 00:14:25.027 ] 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "subsystem": "iobuf", 00:14:25.027 "config": [ 00:14:25.027 { 00:14:25.027 "method": "iobuf_set_options", 00:14:25.027 "params": { 00:14:25.027 "small_pool_count": 8192, 00:14:25.027 "large_pool_count": 1024, 00:14:25.027 "small_bufsize": 8192, 00:14:25.027 "large_bufsize": 135168, 00:14:25.027 "enable_numa": false 00:14:25.027 } 00:14:25.027 } 00:14:25.027 ] 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "subsystem": "sock", 00:14:25.027 "config": [ 00:14:25.027 { 00:14:25.027 "method": "sock_set_default_impl", 00:14:25.027 "params": { 00:14:25.027 "impl_name": "uring" 00:14:25.027 } 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "method": "sock_impl_set_options", 00:14:25.027 "params": { 00:14:25.027 "impl_name": "ssl", 00:14:25.027 "recv_buf_size": 4096, 00:14:25.027 "send_buf_size": 4096, 00:14:25.027 "enable_recv_pipe": true, 00:14:25.027 "enable_quickack": false, 00:14:25.027 "enable_placement_id": 0, 00:14:25.027 "enable_zerocopy_send_server": true, 00:14:25.027 "enable_zerocopy_send_client": false, 00:14:25.027 "zerocopy_threshold": 0, 00:14:25.027 "tls_version": 0, 00:14:25.027 "enable_ktls": false 00:14:25.027 } 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "method": "sock_impl_set_options", 00:14:25.027 "params": { 00:14:25.027 "impl_name": "posix", 00:14:25.027 "recv_buf_size": 2097152, 00:14:25.027 "send_buf_size": 2097152, 00:14:25.027 "enable_recv_pipe": true, 00:14:25.027 "enable_quickack": false, 00:14:25.027 "enable_placement_id": 0, 00:14:25.027 "enable_zerocopy_send_server": true, 00:14:25.027 "enable_zerocopy_send_client": false, 00:14:25.027 "zerocopy_threshold": 0, 00:14:25.027 "tls_version": 0, 00:14:25.027 "enable_ktls": false 00:14:25.027 } 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "method": "sock_impl_set_options", 00:14:25.027 "params": { 00:14:25.027 "impl_name": "uring", 00:14:25.027 "recv_buf_size": 2097152, 00:14:25.027 "send_buf_size": 2097152, 00:14:25.027 "enable_recv_pipe": true, 00:14:25.027 "enable_quickack": false, 00:14:25.027 "enable_placement_id": 0, 00:14:25.027 "enable_zerocopy_send_server": false, 00:14:25.027 "enable_zerocopy_send_client": false, 00:14:25.027 "zerocopy_threshold": 0, 00:14:25.027 "tls_version": 0, 00:14:25.027 "enable_ktls": false 00:14:25.027 } 00:14:25.027 } 00:14:25.027 ] 
00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "subsystem": "vmd", 00:14:25.027 "config": [] 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "subsystem": "accel", 00:14:25.027 "config": [ 00:14:25.027 { 00:14:25.027 "method": "accel_set_options", 00:14:25.027 "params": { 00:14:25.027 "small_cache_size": 128, 00:14:25.027 "large_cache_size": 16, 00:14:25.027 "task_count": 2048, 00:14:25.027 "sequence_count": 2048, 00:14:25.027 "buf_count": 2048 00:14:25.027 } 00:14:25.027 } 00:14:25.027 ] 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "subsystem": "bdev", 00:14:25.027 "config": [ 00:14:25.027 { 00:14:25.027 "method": "bdev_set_options", 00:14:25.027 "params": { 00:14:25.027 "bdev_io_pool_size": 65535, 00:14:25.027 "bdev_io_cache_size": 256, 00:14:25.027 "bdev_auto_examine": true, 00:14:25.027 "iobuf_small_cache_size": 128, 00:14:25.027 "iobuf_large_cache_size": 16 00:14:25.027 } 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "method": "bdev_raid_set_options", 00:14:25.027 "params": { 00:14:25.027 "process_window_size_kb": 1024, 00:14:25.027 "process_max_bandwidth_mb_sec": 0 00:14:25.027 } 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "method": "bdev_iscsi_set_options", 00:14:25.027 "params": { 00:14:25.027 "timeout_sec": 30 00:14:25.027 } 00:14:25.027 }, 00:14:25.027 { 00:14:25.027 "method": "bdev_nvme_set_options", 00:14:25.027 "params": { 00:14:25.028 "action_on_timeout": "none", 00:14:25.028 "timeout_us": 0, 00:14:25.028 "timeout_admin_us": 0, 00:14:25.028 "keep_alive_timeout_ms": 10000, 00:14:25.028 "arbitration_burst": 0, 00:14:25.028 "low_priority_weight": 0, 00:14:25.028 "medium_priority_weight": 0, 00:14:25.028 "high_priority_weight": 0, 00:14:25.028 "nvme_adminq_poll_period_us": 10000, 00:14:25.028 "nvme_ioq_poll_period_us": 0, 00:14:25.028 "io_queue_requests": 0, 00:14:25.028 "delay_cmd_submit": true, 00:14:25.028 "transport_retry_count": 4, 00:14:25.028 "bdev_retry_count": 3, 00:14:25.028 "transport_ack_timeout": 0, 00:14:25.028 "ctrlr_loss_timeout_sec": 0, 00:14:25.028 "reconnect_delay_sec": 0, 00:14:25.028 "fast_io_fail_timeout_sec": 0, 00:14:25.028 "disable_auto_failback": false, 00:14:25.028 "generate_uuids": false, 00:14:25.028 "transport_tos": 0, 00:14:25.028 "nvme_error_stat": false, 00:14:25.028 "rdma_srq_size": 0, 00:14:25.028 "io_path_stat": false, 00:14:25.028 "allow_accel_sequence": false, 00:14:25.028 "rdma_max_cq_size": 0, 00:14:25.028 "rdma_cm_event_timeout_ms": 0, 00:14:25.028 "dhchap_digests": [ 00:14:25.028 "sha256", 00:14:25.028 "sha384", 00:14:25.028 "sha512" 00:14:25.028 ], 00:14:25.028 "dhchap_dhgroups": [ 00:14:25.028 "null", 00:14:25.028 "ffdhe2048", 00:14:25.028 "ffdhe3072", 00:14:25.028 "ffdhe4096", 00:14:25.028 "ffdhe6144", 00:14:25.028 "ffdhe8192" 00:14:25.028 ] 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "bdev_nvme_set_hotplug", 00:14:25.028 "params": { 00:14:25.028 "period_us": 100000, 00:14:25.028 "enable": false 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "bdev_malloc_create", 00:14:25.028 "params": { 00:14:25.028 "name": "malloc0", 00:14:25.028 "num_blocks": 8192, 00:14:25.028 "block_size": 4096, 00:14:25.028 "physical_block_size": 4096, 00:14:25.028 "uuid": "88b5fe89-d94f-4df3-a274-d4b741377e40", 00:14:25.028 "optimal_io_boundary": 0, 00:14:25.028 "md_size": 0, 00:14:25.028 "dif_type": 0, 00:14:25.028 "dif_is_head_of_md": false, 00:14:25.028 "dif_pi_format": 0 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "bdev_wait_for_examine" 00:14:25.028 } 00:14:25.028 ] 00:14:25.028 }, 00:14:25.028 { 
00:14:25.028 "subsystem": "nbd", 00:14:25.028 "config": [] 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "subsystem": "scheduler", 00:14:25.028 "config": [ 00:14:25.028 { 00:14:25.028 "method": "framework_set_scheduler", 00:14:25.028 "params": { 00:14:25.028 "name": "static" 00:14:25.028 } 00:14:25.028 } 00:14:25.028 ] 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "subsystem": "nvmf", 00:14:25.028 "config": [ 00:14:25.028 { 00:14:25.028 "method": "nvmf_set_config", 00:14:25.028 "params": { 00:14:25.028 "discovery_filter": "match_any", 00:14:25.028 "admin_cmd_passthru": { 00:14:25.028 "identify_ctrlr": false 00:14:25.028 }, 00:14:25.028 "dhchap_digests": [ 00:14:25.028 "sha256", 00:14:25.028 "sha384", 00:14:25.028 "sha512" 00:14:25.028 ], 00:14:25.028 "dhchap_dhgroups": [ 00:14:25.028 "null", 00:14:25.028 "ffdhe2048", 00:14:25.028 "ffdhe3072", 00:14:25.028 "ffdhe4096", 00:14:25.028 "ffdhe6144", 00:14:25.028 "ffdhe8192" 00:14:25.028 ] 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "nvmf_set_max_subsystems", 00:14:25.028 "params": { 00:14:25.028 "max_subsystems": 1024 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "nvmf_set_crdt", 00:14:25.028 "params": { 00:14:25.028 "crdt1": 0, 00:14:25.028 "crdt2": 0, 00:14:25.028 "crdt3": 0 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "nvmf_create_transport", 00:14:25.028 "params": { 00:14:25.028 "trtype": "TCP", 00:14:25.028 "max_queue_depth": 128, 00:14:25.028 "max_io_qpairs_per_ctrlr": 127, 00:14:25.028 "in_capsule_data_size": 4096, 00:14:25.028 "max_io_size": 131072, 00:14:25.028 "io_unit_size": 131072, 00:14:25.028 "max_aq_depth": 128, 00:14:25.028 "num_shared_buffers": 511, 00:14:25.028 "buf_cache_size": 4294967295, 00:14:25.028 "dif_insert_or_strip": false, 00:14:25.028 "zcopy": false, 00:14:25.028 "c2h_success": false, 00:14:25.028 "sock_priority": 0, 00:14:25.028 "abort_timeout_sec": 1, 00:14:25.028 "ack_timeout": 0, 00:14:25.028 "data_wr_pool_size": 0 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "nvmf_create_subsystem", 00:14:25.028 "params": { 00:14:25.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.028 "allow_any_host": false, 00:14:25.028 "serial_number": "00000000000000000000", 00:14:25.028 "model_number": "SPDK bdev Controller", 00:14:25.028 "max_namespaces": 32, 00:14:25.028 "min_cntlid": 1, 00:14:25.028 "max_cntlid": 65519, 00:14:25.028 "ana_reporting": false 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "nvmf_subsystem_add_host", 00:14:25.028 "params": { 00:14:25.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.028 "host": "nqn.2016-06.io.spdk:host1", 00:14:25.028 "psk": "key0" 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "nvmf_subsystem_add_ns", 00:14:25.028 "params": { 00:14:25.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.028 "namespace": { 00:14:25.028 "nsid": 1, 00:14:25.028 "bdev_name": "malloc0", 00:14:25.028 "nguid": "88B5FE89D94F4DF3A274D4B741377E40", 00:14:25.028 "uuid": "88b5fe89-d94f-4df3-a274-d4b741377e40", 00:14:25.028 "no_auto_visible": false 00:14:25.028 } 00:14:25.028 } 00:14:25.028 }, 00:14:25.028 { 00:14:25.028 "method": "nvmf_subsystem_add_listener", 00:14:25.028 "params": { 00:14:25.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.028 "listen_address": { 00:14:25.028 "trtype": "TCP", 00:14:25.028 "adrfam": "IPv4", 00:14:25.028 "traddr": "10.0.0.3", 00:14:25.028 "trsvcid": "4420" 00:14:25.028 }, 00:14:25.028 "secure_channel": false, 00:14:25.028 "sock_impl": "ssl" 00:14:25.028 
} 00:14:25.028 } 00:14:25.028 ] 00:14:25.028 } 00:14:25.028 ] 00:14:25.028 }' 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72462 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72462 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72462 ']' 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:25.028 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.028 [2024-11-04 10:03:57.191256] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:25.028 [2024-11-04 10:03:57.191358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.287 [2024-11-04 10:03:57.341447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.287 [2024-11-04 10:03:57.406316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.287 [2024-11-04 10:03:57.406365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.287 [2024-11-04 10:03:57.406377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.287 [2024-11-04 10:03:57.406396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.287 [2024-11-04 10:03:57.406403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
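Annotation on the startup just traced: the quoted block above is the JSON configuration the test hands to nvmf_tgt on /dev/fd/62 — a TLS PSK registered through the keyring (key0 -> /tmp/tmp.DB2DddFYtE), uring as the default socket implementation, a TCP transport, and subsystem nqn.2016-06.io.spdk:cnode1 exposing malloc0 to host1 over a listener on 10.0.0.3:4420. The same state could also be built against an already running target with rpc.py; the lines below are only a minimal sketch using the method names visible in the dump, assuming the default /var/tmp/spdk.sock RPC socket, with flag spellings per current rpc.py conventions (they may differ across SPDK releases), and omitting the version-dependent TLS listener options (sock_impl/secure_channel) shown in the dump.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0 /tmp/tmp.DB2DddFYtE        # register the PSK file under the name key0
$rpc sock_set_default_impl -i uring                       # mirrors the sock_set_default_impl call above
$rpc nvmf_create_transport -t tcp                         # TCP transport, default parameters
$rpc bdev_malloc_create -b malloc0 32 4096                # 32 MiB of 4 KiB blocks, i.e. the 8192-block malloc0 above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420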
00:14:25.287 [2024-11-04 10:03:57.406859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.545 [2024-11-04 10:03:57.575868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.545 [2024-11-04 10:03:57.656567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.545 [2024-11-04 10:03:57.688527] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.545 [2024-11-04 10:03:57.688793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:26.112 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:26.112 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:26.112 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:26.112 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:26.112 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.112 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.112 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72494 00:14:26.371 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72494 /var/tmp/bdevperf.sock 00:14:26.371 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72494 ']' 00:14:26.371 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.371 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:26.371 "subsystems": [ 00:14:26.371 { 00:14:26.371 "subsystem": "keyring", 00:14:26.371 "config": [ 00:14:26.371 { 00:14:26.371 "method": "keyring_file_add_key", 00:14:26.371 "params": { 00:14:26.371 "name": "key0", 00:14:26.371 "path": "/tmp/tmp.DB2DddFYtE" 00:14:26.371 } 00:14:26.371 } 00:14:26.371 ] 00:14:26.371 }, 00:14:26.371 { 00:14:26.371 "subsystem": "iobuf", 00:14:26.371 "config": [ 00:14:26.371 { 00:14:26.371 "method": "iobuf_set_options", 00:14:26.371 "params": { 00:14:26.371 "small_pool_count": 8192, 00:14:26.371 "large_pool_count": 1024, 00:14:26.371 "small_bufsize": 8192, 00:14:26.371 "large_bufsize": 135168, 00:14:26.371 "enable_numa": false 00:14:26.371 } 00:14:26.371 } 00:14:26.371 ] 00:14:26.371 }, 00:14:26.371 { 00:14:26.371 "subsystem": "sock", 00:14:26.371 "config": [ 00:14:26.371 { 00:14:26.371 "method": "sock_set_default_impl", 00:14:26.371 "params": { 00:14:26.371 "impl_name": "uring" 00:14:26.371 } 00:14:26.371 }, 00:14:26.371 { 00:14:26.371 "method": "sock_impl_set_options", 00:14:26.371 "params": { 00:14:26.371 "impl_name": "ssl", 00:14:26.371 "recv_buf_size": 4096, 00:14:26.371 "send_buf_size": 4096, 00:14:26.371 "enable_recv_pipe": true, 00:14:26.371 "enable_quickack": false, 00:14:26.372 "enable_placement_id": 0, 00:14:26.372 "enable_zerocopy_send_server": true, 00:14:26.372 "enable_zerocopy_send_client": false, 00:14:26.372 "zerocopy_threshold": 0, 00:14:26.372 "tls_version": 0, 00:14:26.372 "enable_ktls": false 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "sock_impl_set_options", 00:14:26.372 "params": { 00:14:26.372 "impl_name": "posix", 00:14:26.372 "recv_buf_size": 2097152, 00:14:26.372 
"send_buf_size": 2097152, 00:14:26.372 "enable_recv_pipe": true, 00:14:26.372 "enable_quickack": false, 00:14:26.372 "enable_placement_id": 0, 00:14:26.372 "enable_zerocopy_send_server": true, 00:14:26.372 "enable_zerocopy_send_client": false, 00:14:26.372 "zerocopy_threshold": 0, 00:14:26.372 "tls_version": 0, 00:14:26.372 "enable_ktls": false 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "sock_impl_set_options", 00:14:26.372 "params": { 00:14:26.372 "impl_name": "uring", 00:14:26.372 "recv_buf_size": 2097152, 00:14:26.372 "send_buf_size": 2097152, 00:14:26.372 "enable_recv_pipe": true, 00:14:26.372 "enable_quickack": false, 00:14:26.372 "enable_placement_id": 0, 00:14:26.372 "enable_zerocopy_send_server": false, 00:14:26.372 "enable_zerocopy_send_client": false, 00:14:26.372 "zerocopy_threshold": 0, 00:14:26.372 "tls_version": 0, 00:14:26.372 "enable_ktls": false 00:14:26.372 } 00:14:26.372 } 00:14:26.372 ] 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "subsystem": "vmd", 00:14:26.372 "config": [] 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "subsystem": "accel", 00:14:26.372 "config": [ 00:14:26.372 { 00:14:26.372 "method": "accel_set_options", 00:14:26.372 "params": { 00:14:26.372 "small_cache_size": 128, 00:14:26.372 "large_cache_size": 16, 00:14:26.372 "task_count": 2048, 00:14:26.372 "sequence_count": 2048, 00:14:26.372 "buf_count": 2048 00:14:26.372 } 00:14:26.372 } 00:14:26.372 ] 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "subsystem": "bdev", 00:14:26.372 "config": [ 00:14:26.372 { 00:14:26.372 "method": "bdev_set_options", 00:14:26.372 "params": { 00:14:26.372 "bdev_io_pool_size": 65535, 00:14:26.372 "bdev_io_cache_size": 256, 00:14:26.372 "bdev_auto_examine": true, 00:14:26.372 "iobuf_small_cache_size": 128, 00:14:26.372 "iobuf_large_cache_size": 16 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "bdev_raid_set_options", 00:14:26.372 "params": { 00:14:26.372 "process_window_size_kb": 1024, 00:14:26.372 "process_max_bandwidth_mb_sec": 0 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "bdev_iscsi_set_options", 00:14:26.372 "params": { 00:14:26.372 "timeout_sec": 30 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "bdev_nvme_set_options", 00:14:26.372 "params": { 00:14:26.372 "action_on_timeout": "none", 00:14:26.372 "timeout_us": 0, 00:14:26.372 "timeout_admin_us": 0, 00:14:26.372 "keep_alive_timeout_ms": 10000, 00:14:26.372 "arbitration_burst": 0, 00:14:26.372 "low_priority_weight": 0, 00:14:26.372 "medium_priority_weight": 0, 00:14:26.372 "high_priority_weight": 0, 00:14:26.372 "nvme_adminq_poll_period_us": 10000, 00:14:26.372 "nvme_ioq_poll_period_us": 0, 00:14:26.372 "io_queue_requests": 512, 00:14:26.372 "delay_cmd_submit": true, 00:14:26.372 "transport_retry_count": 4, 00:14:26.372 "bdev_retry_count": 3, 00:14:26.372 "transport_ack_timeout": 0, 00:14:26.372 "ctrlr_loss_timeout_sec": 0, 00:14:26.372 "reconnect_delay_sec": 0, 00:14:26.372 "fast_io_fail_timeout_sec": 0, 00:14:26.372 "disable_auto_failback": false, 00:14:26.372 "generate_uuids": false, 00:14:26.372 "transport_tos": 0, 00:14:26.372 "nvme_error_stat": false, 00:14:26.372 "rdma_srq_size": 0, 00:14:26.372 "io_path_stat": false, 00:14:26.372 "allow_accel_sequence": false, 00:14:26.372 "rdma_max_cq_size": 0, 00:14:26.372 "rdma_cm_event_timeout_ms": 0, 00:14:26.372 "dhchap_digests": [ 00:14:26.372 "sha256", 00:14:26.372 "sha384", 00:14:26.372 "sha512" 00:14:26.372 ], 00:14:26.372 "dhchap_dhgroups": [ 00:14:26.372 "null", 00:14:26.372 
"ffdhe2048", 00:14:26.372 "ffdhe3072", 00:14:26.372 "ffdhe4096", 00:14:26.372 "ffdhe6144", 00:14:26.372 "ffdhe8192" 00:14:26.372 ] 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "bdev_nvme_attach_controller", 00:14:26.372 "params": { 00:14:26.372 "name": "nvme0", 00:14:26.372 "trtype": "TCP", 00:14:26.372 "adrfam": "IPv4", 00:14:26.372 "traddr": "10.0.0.3", 00:14:26.372 "trsvcid": "4420", 00:14:26.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.372 "prchk_reftag": false, 00:14:26.372 "prchk_guard": false, 00:14:26.372 "ctrlr_loss_timeout_sec": 0, 00:14:26.372 "reconnect_delay_sec": 0, 00:14:26.372 "fast_io_fail_timeout_sec": 0, 00:14:26.372 "psk": "key0", 00:14:26.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.372 "hdgst": false, 00:14:26.372 "ddgst": false, 00:14:26.372 "multipath": "multipath" 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "bdev_nvme_set_hotplug", 00:14:26.372 "params": { 00:14:26.372 "period_us": 100000, 00:14:26.372 "enable": false 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "bdev_enable_histogram", 00:14:26.372 "params": { 00:14:26.372 "name": "nvme0n1", 00:14:26.372 "enable": true 00:14:26.372 } 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "method": "bdev_wait_for_examine" 00:14:26.372 } 00:14:26.372 ] 00:14:26.372 }, 00:14:26.372 { 00:14:26.372 "subsystem": "nbd", 00:14:26.372 "config": [] 00:14:26.372 } 00:14:26.372 ] 00:14:26.372 }' 00:14:26.372 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:26.372 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:26.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:26.373 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.373 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:26.373 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.373 [2024-11-04 10:03:58.337338] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:14:26.373 [2024-11-04 10:03:58.337443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72494 ] 00:14:26.373 [2024-11-04 10:03:58.490990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.631 [2024-11-04 10:03:58.555630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.631 [2024-11-04 10:03:58.696643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.631 [2024-11-04 10:03:58.747634] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:27.197 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:27.197 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:27.197 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:27.197 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:27.529 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.529 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:27.787 Running I/O for 1 seconds... 00:14:28.722 3968.00 IOPS, 15.50 MiB/s 00:14:28.722 Latency(us) 00:14:28.722 [2024-11-04T10:04:00.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.722 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:28.722 Verification LBA range: start 0x0 length 0x2000 00:14:28.722 nvme0n1 : 1.02 3997.11 15.61 0.00 0.00 31668.12 8936.73 21805.61 00:14:28.722 [2024-11-04T10:04:00.892Z] =================================================================================================================== 00:14:28.722 [2024-11-04T10:04:00.892Z] Total : 3997.11 15.61 0.00 0.00 31668.12 8936.73 21805.61 00:14:28.722 { 00:14:28.722 "results": [ 00:14:28.722 { 00:14:28.722 "job": "nvme0n1", 00:14:28.722 "core_mask": "0x2", 00:14:28.722 "workload": "verify", 00:14:28.722 "status": "finished", 00:14:28.722 "verify_range": { 00:14:28.722 "start": 0, 00:14:28.722 "length": 8192 00:14:28.722 }, 00:14:28.722 "queue_depth": 128, 00:14:28.722 "io_size": 4096, 00:14:28.722 "runtime": 1.02474, 00:14:28.722 "iops": 3997.1114624197357, 00:14:28.722 "mibps": 15.613716650077093, 00:14:28.722 "io_failed": 0, 00:14:28.722 "io_timeout": 0, 00:14:28.722 "avg_latency_us": 31668.123636363638, 00:14:28.722 "min_latency_us": 8936.727272727272, 00:14:28.722 "max_latency_us": 21805.614545454544 00:14:28.722 } 00:14:28.722 ], 00:14:28.722 "core_count": 1 00:14:28.722 } 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 
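Annotation on the result block just above: bdevperf was launched with -q 128 -o 4k -w verify -t 1, i.e. 128 outstanding 4 KiB verify I/Os for roughly one second against the nvme0n1 bdev attached over NVMe/TCP with the key0 PSK. The summary is internally consistent: 3997.11 IOPS over the reported 1.02474 s runtime implies about 4096 completed I/Os, and 3997.11 IOPS x 4096 B comes to about 15.61 MiB/s, matching the MiB/s column. An illustrative one-off cross-check of that arithmetic (not part of the test scripts):

awk 'BEGIN { iops=3997.1114624197357; rt=1.02474; iosz=4096;
             printf "total I/Os ~ %.0f, throughput ~ %.2f MiB/s\n", iops*rt, iops*iosz/1048576 }'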
00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:28.722 nvmf_trace.0 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72494 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72494 ']' 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72494 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72494 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72494' 00:14:28.722 killing process with pid 72494 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72494 00:14:28.722 Received shutdown signal, test time was about 1.000000 seconds 00:14:28.722 00:14:28.722 Latency(us) 00:14:28.722 [2024-11-04T10:04:00.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.722 [2024-11-04T10:04:00.892Z] =================================================================================================================== 00:14:28.722 [2024-11-04T10:04:00.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:28.722 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72494 00:14:28.981 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:28.981 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:28.981 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:28.981 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.981 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:28.981 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.981 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.981 rmmod nvme_tcp 00:14:29.239 rmmod nvme_fabrics 00:14:29.239 rmmod nvme_keyring 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72462 ']' 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72462 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72462 ']' 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72462 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72462 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72462' 00:14:29.239 killing process with pid 72462 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72462 00:14:29.239 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72462 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:29.497 10:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.497 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.UoQnZCm3Md /tmp/tmp.d6edxxLE26 /tmp/tmp.DB2DddFYtE 00:14:29.756 ************************************ 00:14:29.756 END TEST nvmf_tls 00:14:29.756 ************************************ 00:14:29.756 00:14:29.756 real 1m27.376s 00:14:29.756 user 2m22.200s 00:14:29.756 sys 0m27.635s 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:29.756 ************************************ 00:14:29.756 START TEST nvmf_fips 00:14:29.756 ************************************ 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:29.756 * Looking for test storage... 
00:14:29.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:14:29.756 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:30.015 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:30.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.016 --rc genhtml_branch_coverage=1 00:14:30.016 --rc genhtml_function_coverage=1 00:14:30.016 --rc genhtml_legend=1 00:14:30.016 --rc geninfo_all_blocks=1 00:14:30.016 --rc geninfo_unexecuted_blocks=1 00:14:30.016 00:14:30.016 ' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:30.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.016 --rc genhtml_branch_coverage=1 00:14:30.016 --rc genhtml_function_coverage=1 00:14:30.016 --rc genhtml_legend=1 00:14:30.016 --rc geninfo_all_blocks=1 00:14:30.016 --rc geninfo_unexecuted_blocks=1 00:14:30.016 00:14:30.016 ' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:30.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.016 --rc genhtml_branch_coverage=1 00:14:30.016 --rc genhtml_function_coverage=1 00:14:30.016 --rc genhtml_legend=1 00:14:30.016 --rc geninfo_all_blocks=1 00:14:30.016 --rc geninfo_unexecuted_blocks=1 00:14:30.016 00:14:30.016 ' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:30.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.016 --rc genhtml_branch_coverage=1 00:14:30.016 --rc genhtml_function_coverage=1 00:14:30.016 --rc genhtml_legend=1 00:14:30.016 --rc geninfo_all_blocks=1 00:14:30.016 --rc geninfo_unexecuted_blocks=1 00:14:30.016 00:14:30.016 ' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
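Annotation on the lcov trace above: before starting the FIPS test, autotest_common.sh probes the installed lcov with lcov --version | awk '{print $NF}' and, because 1.15 compares below 2, keeps the pre-2.0 spelling of the coverage switches in the exported LCOV_OPTS/LCOV (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1). A simplified sketch of that gate, shown only to make the traced comparison easier to read — the real implementation is cmp_versions in scripts/common.sh:

ver=$(lcov --version | awk '{print $NF}')                  # e.g. 1.15 on this runner
if [ "$(printf '%s\n' "$ver" 2 | sort -V | head -n1)" != 2 ]; then
    # lcov older than 2.x: use the legacy --rc option names seen in the exports above
    export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi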
00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.016 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:30.016 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:30.016 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:30.016 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:30.016 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:30.016 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.016 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.016 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:30.017 Error setting digest 00:14:30.017 40A24FE1FB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:30.017 40A24FE1FB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:30.017 
10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:30.017 Cannot find device "nvmf_init_br" 00:14:30.017 10:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:30.017 Cannot find device "nvmf_init_br2" 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:30.017 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:30.275 Cannot find device "nvmf_tgt_br" 00:14:30.275 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:30.275 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.275 Cannot find device "nvmf_tgt_br2" 00:14:30.275 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:30.275 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:30.275 Cannot find device "nvmf_init_br" 00:14:30.275 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:30.275 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:30.275 Cannot find device "nvmf_init_br2" 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:30.276 Cannot find device "nvmf_tgt_br" 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:30.276 Cannot find device "nvmf_tgt_br2" 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:30.276 Cannot find device "nvmf_br" 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:30.276 Cannot find device "nvmf_init_if" 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:30.276 Cannot find device "nvmf_init_if2" 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:30.276 10:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:30.276 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:30.535 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:30.535 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:14:30.535 00:14:30.535 --- 10.0.0.3 ping statistics --- 00:14:30.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.535 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:30.535 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:30.535 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:14:30.535 00:14:30.535 --- 10.0.0.4 ping statistics --- 00:14:30.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.535 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:30.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:30.535 00:14:30.535 --- 10.0.0.1 ping statistics --- 00:14:30.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.535 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:30.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:30.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:30.535 00:14:30.535 --- 10.0.0.2 ping statistics --- 00:14:30.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.535 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72825 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72825 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72825 ']' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:30.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:30.535 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:30.535 [2024-11-04 10:04:02.639215] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:14:30.535 [2024-11-04 10:04:02.639334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.793 [2024-11-04 10:04:02.795096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.793 [2024-11-04 10:04:02.854398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.793 [2024-11-04 10:04:02.854479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.793 [2024-11-04 10:04:02.854505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.793 [2024-11-04 10:04:02.854516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.793 [2024-11-04 10:04:02.854525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.793 [2024-11-04 10:04:02.855001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.793 [2024-11-04 10:04:02.912848] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.SZh 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.SZh 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.SZh 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.SZh 00:14:31.728 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.986 [2024-11-04 10:04:03.947790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.986 [2024-11-04 10:04:03.963744] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:31.986 [2024-11-04 10:04:03.963972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:31.986 malloc0 00:14:31.986 10:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72861 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72861 /var/tmp/bdevperf.sock 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72861 ']' 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.986 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:31.986 [2024-11-04 10:04:04.113671] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:31.986 [2024-11-04 10:04:04.113785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72861 ] 00:14:32.245 [2024-11-04 10:04:04.264109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.245 [2024-11-04 10:04:04.329080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.245 [2024-11-04 10:04:04.387906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:33.179 10:04:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:33.179 10:04:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:14:33.179 10:04:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.SZh 00:14:33.179 10:04:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:33.438 [2024-11-04 10:04:05.548722] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.696 TLSTESTn1 00:14:33.696 10:04:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.696 Running I/O for 10 seconds... 
00:14:35.593 4204.00 IOPS, 16.42 MiB/s [2024-11-04T10:04:09.140Z] 4119.50 IOPS, 16.09 MiB/s [2024-11-04T10:04:10.081Z] 4015.00 IOPS, 15.68 MiB/s [2024-11-04T10:04:11.017Z] 4009.00 IOPS, 15.66 MiB/s [2024-11-04T10:04:11.954Z] 3974.00 IOPS, 15.52 MiB/s [2024-11-04T10:04:12.890Z] 3948.33 IOPS, 15.42 MiB/s [2024-11-04T10:04:13.835Z] 3962.00 IOPS, 15.48 MiB/s [2024-11-04T10:04:14.784Z] 3954.88 IOPS, 15.45 MiB/s [2024-11-04T10:04:16.160Z] 3948.22 IOPS, 15.42 MiB/s [2024-11-04T10:04:16.160Z] 3944.20 IOPS, 15.41 MiB/s 00:14:43.990 Latency(us) 00:14:43.990 [2024-11-04T10:04:16.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.990 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:43.990 Verification LBA range: start 0x0 length 0x2000 00:14:43.990 TLSTESTn1 : 10.02 3949.78 15.43 0.00 0.00 32345.27 6642.97 38130.04 00:14:43.990 [2024-11-04T10:04:16.160Z] =================================================================================================================== 00:14:43.990 [2024-11-04T10:04:16.160Z] Total : 3949.78 15.43 0.00 0.00 32345.27 6642.97 38130.04 00:14:43.990 { 00:14:43.990 "results": [ 00:14:43.990 { 00:14:43.990 "job": "TLSTESTn1", 00:14:43.990 "core_mask": "0x4", 00:14:43.990 "workload": "verify", 00:14:43.990 "status": "finished", 00:14:43.990 "verify_range": { 00:14:43.990 "start": 0, 00:14:43.990 "length": 8192 00:14:43.990 }, 00:14:43.990 "queue_depth": 128, 00:14:43.990 "io_size": 4096, 00:14:43.990 "runtime": 10.017274, 00:14:43.990 "iops": 3949.7771549425524, 00:14:43.990 "mibps": 15.428817011494345, 00:14:43.990 "io_failed": 0, 00:14:43.990 "io_timeout": 0, 00:14:43.990 "avg_latency_us": 32345.271372574247, 00:14:43.990 "min_latency_us": 6642.967272727273, 00:14:43.990 "max_latency_us": 38130.03636363636 00:14:43.990 } 00:14:43.990 ], 00:14:43.990 "core_count": 1 00:14:43.990 } 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:43.990 nvmf_trace.0 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72861 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72861 ']' 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 
72861 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72861 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:43.990 killing process with pid 72861 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72861' 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72861 00:14:43.990 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.990 00:14:43.990 Latency(us) 00:14:43.990 [2024-11-04T10:04:16.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.990 [2024-11-04T10:04:16.160Z] =================================================================================================================== 00:14:43.990 [2024-11-04T10:04:16.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.990 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72861 00:14:43.990 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:43.990 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:43.990 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.250 rmmod nvme_tcp 00:14:44.250 rmmod nvme_fabrics 00:14:44.250 rmmod nvme_keyring 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72825 ']' 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72825 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72825 ']' 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 72825 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72825 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:14:44.250 killing process with pid 72825 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72825' 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72825 00:14:44.250 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72825 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.509 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:44.768 10:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.SZh 00:14:44.768 00:14:44.768 real 0m14.980s 00:14:44.768 user 0m20.828s 00:14:44.768 sys 0m5.768s 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:44.768 ************************************ 00:14:44.768 END TEST nvmf_fips 00:14:44.768 ************************************ 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.768 ************************************ 00:14:44.768 START TEST nvmf_control_msg_list 00:14:44.768 ************************************ 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:44.768 * Looking for test storage... 00:14:44.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:14:44.768 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.028 --rc genhtml_branch_coverage=1 00:14:45.028 --rc genhtml_function_coverage=1 00:14:45.028 --rc genhtml_legend=1 00:14:45.028 --rc geninfo_all_blocks=1 00:14:45.028 --rc geninfo_unexecuted_blocks=1 00:14:45.028 00:14:45.028 ' 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.028 --rc genhtml_branch_coverage=1 00:14:45.028 --rc genhtml_function_coverage=1 00:14:45.028 --rc genhtml_legend=1 00:14:45.028 --rc geninfo_all_blocks=1 00:14:45.028 --rc geninfo_unexecuted_blocks=1 00:14:45.028 00:14:45.028 ' 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.028 --rc genhtml_branch_coverage=1 00:14:45.028 --rc genhtml_function_coverage=1 00:14:45.028 --rc genhtml_legend=1 00:14:45.028 --rc geninfo_all_blocks=1 00:14:45.028 --rc geninfo_unexecuted_blocks=1 00:14:45.028 00:14:45.028 ' 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:45.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.028 --rc genhtml_branch_coverage=1 00:14:45.028 --rc genhtml_function_coverage=1 00:14:45.028 --rc genhtml_legend=1 00:14:45.028 --rc geninfo_all_blocks=1 00:14:45.028 --rc 
geninfo_unexecuted_blocks=1 00:14:45.028 00:14:45.028 ' 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.028 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.028 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.028 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.028 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.028 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.028 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.029 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:45.029 Cannot find device "nvmf_init_br" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:45.029 Cannot find device "nvmf_init_br2" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:45.029 Cannot find device "nvmf_tgt_br" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.029 Cannot find device "nvmf_tgt_br2" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:45.029 Cannot find device "nvmf_init_br" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:45.029 Cannot find device "nvmf_init_br2" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:45.029 Cannot find device "nvmf_tgt_br" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:45.029 Cannot find device "nvmf_tgt_br2" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:45.029 Cannot find device "nvmf_br" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:45.029 Cannot find 
device "nvmf_init_if" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:45.029 Cannot find device "nvmf_init_if2" 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.029 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:45.289 10:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:45.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:45.289 00:14:45.289 --- 10.0.0.3 ping statistics --- 00:14:45.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.289 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:45.289 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:45.289 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:14:45.289 00:14:45.289 --- 10.0.0.4 ping statistics --- 00:14:45.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.289 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:45.289 00:14:45.289 --- 10.0.0.1 ping statistics --- 00:14:45.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.289 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:45.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:45.289 00:14:45.289 --- 10.0.0.2 ping statistics --- 00:14:45.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.289 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73258 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73258 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 73258 ']' 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:45.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
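The nvmf_veth_init sequence traced above builds the test network: a namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, initiator interfaces left in the root namespace, a bridge (nvmf_br) joining the veth peers, addresses 10.0.0.1-10.0.0.4/24, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A condensed sketch of the same setup, covering only the first interface pair, follows; it restates the ip/iptables commands already visible in the trace and is not the full common.sh implementation (the second interface pair, error handling and cleanup are omitted).

  # Abridged sketch of the nvmf_veth_init topology (first interface pair only).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two root-namespace peer ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                          # initiator -> target sanity check

The bridge-of-veth-peers layout is what lets the initiator (root namespace) and the target (inside nvmf_tgt_ns_spdk) reach each other over an ordinary L2 segment, so the NVMe/TCP listener on 10.0.0.3:4420 is exercised through a real network path rather than loopback.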
00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:45.289 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.289 [2024-11-04 10:04:17.449991] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:45.289 [2024-11-04 10:04:17.450085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.554 [2024-11-04 10:04:17.602118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.554 [2024-11-04 10:04:17.664084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.554 [2024-11-04 10:04:17.664178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.554 [2024-11-04 10:04:17.664205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.554 [2024-11-04 10:04:17.664216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.554 [2024-11-04 10:04:17.664235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.554 [2024-11-04 10:04:17.664726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.824 [2024-11-04 10:04:17.722648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.390 [2024-11-04 10:04:18.512001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.390 Malloc0 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.390 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:46.390 [2024-11-04 10:04:18.560123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:46.649 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.649 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73290 00:14:46.649 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:46.649 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73291 00:14:46.649 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:46.649 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73292 00:14:46.649 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:46.649 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73290 00:14:46.649 [2024-11-04 10:04:18.750683] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:46.649 [2024-11-04 10:04:18.750900] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:46.649 [2024-11-04 10:04:18.760765] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:48.025 Initializing NVMe Controllers 00:14:48.025 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:48.025 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:48.025 Initialization complete. Launching workers. 00:14:48.025 ======================================================== 00:14:48.025 Latency(us) 00:14:48.025 Device Information : IOPS MiB/s Average min max 00:14:48.025 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3319.00 12.96 300.96 209.12 807.53 00:14:48.025 ======================================================== 00:14:48.025 Total : 3319.00 12.96 300.96 209.12 807.53 00:14:48.025 00:14:48.025 Initializing NVMe Controllers 00:14:48.025 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:48.025 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:48.025 Initialization complete. Launching workers. 00:14:48.025 ======================================================== 00:14:48.025 Latency(us) 00:14:48.025 Device Information : IOPS MiB/s Average min max 00:14:48.025 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3319.00 12.96 300.88 206.27 808.36 00:14:48.025 ======================================================== 00:14:48.025 Total : 3319.00 12.96 300.88 206.27 808.36 00:14:48.025 00:14:48.025 Initializing NVMe Controllers 00:14:48.025 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:48.025 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:48.025 Initialization complete. Launching workers. 
00:14:48.025 ======================================================== 00:14:48.025 Latency(us) 00:14:48.025 Device Information : IOPS MiB/s Average min max 00:14:48.025 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3341.00 13.05 298.93 121.78 806.71 00:14:48.025 ======================================================== 00:14:48.025 Total : 3341.00 13.05 298.93 121.78 806.71 00:14:48.025 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73291 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73292 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:48.025 rmmod nvme_tcp 00:14:48.025 rmmod nvme_fabrics 00:14:48.025 rmmod nvme_keyring 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73258 ']' 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73258 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 73258 ']' 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 73258 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73258 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:48.025 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73258' 00:14:48.025 killing process with pid 73258 00:14:48.026 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 73258 00:14:48.026 10:04:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@976 -- # wait 73258 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:48.026 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:48.284 00:14:48.284 real 0m3.650s 00:14:48.284 user 0m5.747s 00:14:48.284 
sys 0m1.362s 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:48.284 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:48.284 ************************************ 00:14:48.284 END TEST nvmf_control_msg_list 00:14:48.284 ************************************ 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.544 ************************************ 00:14:48.544 START TEST nvmf_wait_for_buf 00:14:48.544 ************************************ 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:48.544 * Looking for test storage... 00:14:48.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:48.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.544 --rc genhtml_branch_coverage=1 00:14:48.544 --rc genhtml_function_coverage=1 00:14:48.544 --rc genhtml_legend=1 00:14:48.544 --rc geninfo_all_blocks=1 00:14:48.544 --rc geninfo_unexecuted_blocks=1 00:14:48.544 00:14:48.544 ' 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:48.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.544 --rc genhtml_branch_coverage=1 00:14:48.544 --rc genhtml_function_coverage=1 00:14:48.544 --rc genhtml_legend=1 00:14:48.544 --rc geninfo_all_blocks=1 00:14:48.544 --rc geninfo_unexecuted_blocks=1 00:14:48.544 00:14:48.544 ' 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:48.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.544 --rc genhtml_branch_coverage=1 00:14:48.544 --rc genhtml_function_coverage=1 00:14:48.544 --rc genhtml_legend=1 00:14:48.544 --rc geninfo_all_blocks=1 00:14:48.544 --rc geninfo_unexecuted_blocks=1 00:14:48.544 00:14:48.544 ' 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:48.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.544 --rc genhtml_branch_coverage=1 00:14:48.544 --rc genhtml_function_coverage=1 00:14:48.544 --rc genhtml_legend=1 00:14:48.544 --rc geninfo_all_blocks=1 00:14:48.544 --rc geninfo_unexecuted_blocks=1 00:14:48.544 00:14:48.544 ' 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.544 10:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.544 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.545 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
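The "[: : integer expression expected" message at nvmf/common.sh line 33, seen here and earlier in the control_msg_list run, is a bash diagnostic rather than a test failure: the traced command is '[' '' -eq 1 ']', an arithmetic comparison against an empty string, so test prints the complaint and returns non-zero, and the trace simply continues past the check. A minimal reproduction and a defensive variant are sketched below; "flag" is an illustrative variable name, not the one actually tested at that line.

  # Reproduces the diagnostic: -eq requires integers on both sides.
  flag=""
  [ "$flag" -eq 1 ] && echo enabled            # prints "[: : integer expression expected"
  # Defensive form: default the value so the comparison always sees an integer.
  [ "${flag:-0}" -eq 1 ] && echo enabled || echo disabled   # prints "disabled", no diagnostic

Because the surrounding script tolerates the non-zero status (the trace proceeds to the next check immediately), the stray message is cosmetic noise in this log rather than something that changes the test result.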
00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:48.545 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:48.803 Cannot find device "nvmf_init_br" 00:14:48.803 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:48.803 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:48.803 Cannot find device "nvmf_init_br2" 00:14:48.803 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:48.803 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:48.803 Cannot find device "nvmf_tgt_br" 00:14:48.803 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:48.803 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:48.803 Cannot find device "nvmf_tgt_br2" 00:14:48.803 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:48.803 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:48.804 Cannot find device "nvmf_init_br" 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:48.804 Cannot find device "nvmf_init_br2" 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:48.804 Cannot find device "nvmf_tgt_br" 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:48.804 Cannot find device "nvmf_tgt_br2" 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:48.804 Cannot find device "nvmf_br" 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:48.804 Cannot find device "nvmf_init_if" 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:48.804 Cannot find device "nvmf_init_if2" 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.804 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:48.804 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.062 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.062 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.062 10:04:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:49.062 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:49.062 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:14:49.062 00:14:49.062 --- 10.0.0.3 ping statistics --- 00:14:49.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.062 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:49.062 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:49.062 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:49.062 00:14:49.062 --- 10.0.0.4 ping statistics --- 00:14:49.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.062 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:49.062 00:14:49.062 --- 10.0.0.1 ping statistics --- 00:14:49.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.062 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:49.062 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:49.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:49.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:49.063 00:14:49.063 --- 10.0.0.2 ping statistics --- 00:14:49.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.063 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73536 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73536 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 73536 ']' 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:49.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:49.063 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.063 [2024-11-04 10:04:21.181428] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
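For reference, the nvmf_veth_init sequence traced above (nvmf/common.sh@177-225) is what gives the test its fake fabric: a network namespace for the target, veth pairs whose host-side peers hang off a bridge, 10.0.0.1/2 on the initiator side and 10.0.0.3/4 inside the namespace, iptables ACCEPT rules for TCP port 4420, and cross pings to prove connectivity before nvmf_tgt is started inside that namespace with --wait-for-rpc. The following is a condensed sketch of the first initiator/target pair only, reassembled from the commands in the trace (interface names, addresses and the port are as logged; root privileges and a clean host are assumed):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + host-side peer
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end + host-side peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the two host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                             # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is wired the same way, and the FORWARD ACCEPT rule on nvmf_br lets the bridged traffic pass.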
00:14:49.063 [2024-11-04 10:04:21.181511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.322 [2024-11-04 10:04:21.332929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.322 [2024-11-04 10:04:21.401261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.322 [2024-11-04 10:04:21.401354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.322 [2024-11-04 10:04:21.401377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.322 [2024-11-04 10:04:21.401388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.322 [2024-11-04 10:04:21.401397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.322 [2024-11-04 10:04:21.401897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.322 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:49.322 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:14:49.322 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.322 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:49.322 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.581 10:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 [2024-11-04 10:04:21.560696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 Malloc0 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 [2024-11-04 10:04:21.631003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:49.581 [2024-11-04 10:04:21.655117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.581 10:04:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:49.844 [2024-11-04 10:04:21.863816] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:51.224 Initializing NVMe Controllers 00:14:51.224 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:51.224 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:51.224 Initialization complete. Launching workers. 00:14:51.224 ======================================================== 00:14:51.224 Latency(us) 00:14:51.224 Device Information : IOPS MiB/s Average min max 00:14:51.224 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 506.46 63.31 7898.28 3466.82 8122.67 00:14:51.224 ======================================================== 00:14:51.224 Total : 506.46 63.31 7898.28 3466.82 8122.67 00:14:51.224 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4826 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4826 -eq 0 ]] 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:51.224 rmmod nvme_tcp 00:14:51.224 rmmod nvme_fabrics 00:14:51.224 rmmod nvme_keyring 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73536 ']' 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73536 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 73536 ']' 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # 
kill -0 73536 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73536 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:51.224 killing process with pid 73536 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73536' 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 73536 00:14:51.224 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 73536 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:51.484 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:51.744 00:14:51.744 real 0m3.287s 00:14:51.744 user 0m2.624s 00:14:51.744 sys 0m0.826s 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 ************************************ 00:14:51.744 END TEST nvmf_wait_for_buf 00:14:51.744 ************************************ 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:51.744 ************************************ 00:14:51.744 END TEST nvmf_target_extra 00:14:51.744 ************************************ 00:14:51.744 00:14:51.744 real 5m7.060s 00:14:51.744 user 10m45.086s 00:14:51.744 sys 1m7.572s 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.744 10:04:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 10:04:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:51.744 10:04:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:51.744 10:04:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.744 10:04:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.744 ************************************ 00:14:51.744 START TEST nvmf_host 00:14:51.744 ************************************ 00:14:51.744 10:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:52.002 * Looking for test storage... 
00:14:52.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:52.002 10:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:52.002 10:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:14:52.002 10:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:52.002 10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:52.002 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.002 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.002 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.002 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.002 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.002 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.003 --rc genhtml_branch_coverage=1 00:14:52.003 --rc genhtml_function_coverage=1 00:14:52.003 --rc genhtml_legend=1 00:14:52.003 --rc geninfo_all_blocks=1 00:14:52.003 --rc geninfo_unexecuted_blocks=1 00:14:52.003 00:14:52.003 ' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:52.003 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:52.003 --rc genhtml_branch_coverage=1 00:14:52.003 --rc genhtml_function_coverage=1 00:14:52.003 --rc genhtml_legend=1 00:14:52.003 --rc geninfo_all_blocks=1 00:14:52.003 --rc geninfo_unexecuted_blocks=1 00:14:52.003 00:14:52.003 ' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.003 --rc genhtml_branch_coverage=1 00:14:52.003 --rc genhtml_function_coverage=1 00:14:52.003 --rc genhtml_legend=1 00:14:52.003 --rc geninfo_all_blocks=1 00:14:52.003 --rc geninfo_unexecuted_blocks=1 00:14:52.003 00:14:52.003 ' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.003 --rc genhtml_branch_coverage=1 00:14:52.003 --rc genhtml_function_coverage=1 00:14:52.003 --rc genhtml_legend=1 00:14:52.003 --rc geninfo_all_blocks=1 00:14:52.003 --rc geninfo_unexecuted_blocks=1 00:14:52.003 00:14:52.003 ' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:52.003 
10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:52.003 ************************************ 00:14:52.003 START TEST nvmf_identify 00:14:52.003 ************************************ 00:14:52.003 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:52.264 * Looking for test storage... 00:14:52.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:52.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.264 --rc genhtml_branch_coverage=1 00:14:52.264 --rc genhtml_function_coverage=1 00:14:52.264 --rc genhtml_legend=1 00:14:52.264 --rc geninfo_all_blocks=1 00:14:52.264 --rc geninfo_unexecuted_blocks=1 00:14:52.264 00:14:52.264 ' 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:52.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.264 --rc genhtml_branch_coverage=1 00:14:52.264 --rc genhtml_function_coverage=1 00:14:52.264 --rc genhtml_legend=1 00:14:52.264 --rc geninfo_all_blocks=1 00:14:52.264 --rc geninfo_unexecuted_blocks=1 00:14:52.264 00:14:52.264 ' 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:52.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.264 --rc genhtml_branch_coverage=1 00:14:52.264 --rc genhtml_function_coverage=1 00:14:52.264 --rc genhtml_legend=1 00:14:52.264 --rc geninfo_all_blocks=1 00:14:52.264 --rc geninfo_unexecuted_blocks=1 00:14:52.264 00:14:52.264 ' 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:52.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.264 --rc genhtml_branch_coverage=1 00:14:52.264 --rc genhtml_function_coverage=1 00:14:52.264 --rc genhtml_legend=1 00:14:52.264 --rc geninfo_all_blocks=1 00:14:52.264 --rc geninfo_unexecuted_blocks=1 00:14:52.264 00:14:52.264 ' 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.264 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.265 
10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:52.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.265 10:04:24 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:52.265 Cannot find device "nvmf_init_br" 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:52.265 Cannot find device "nvmf_init_br2" 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:52.265 Cannot find device "nvmf_tgt_br" 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:52.265 Cannot find device "nvmf_tgt_br2" 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:52.265 Cannot find device "nvmf_init_br" 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:52.265 Cannot find device "nvmf_init_br2" 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:52.265 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:52.523 Cannot find device "nvmf_tgt_br" 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:52.523 Cannot find device "nvmf_tgt_br2" 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:52.523 Cannot find device "nvmf_br" 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:52.523 Cannot find device "nvmf_init_if" 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:52.523 Cannot find device "nvmf_init_if2" 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.523 
10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.523 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:52.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:52.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:14:52.782 00:14:52.782 --- 10.0.0.3 ping statistics --- 00:14:52.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.782 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:52.782 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:52.782 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:14:52.782 00:14:52.782 --- 10.0.0.4 ping statistics --- 00:14:52.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.782 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:52.782 00:14:52.782 --- 10.0.0.1 ping statistics --- 00:14:52.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.782 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:52.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:14:52.782 00:14:52.782 --- 10.0.0.2 ping statistics --- 00:14:52.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.782 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:52.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
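For readers skimming the trace: steps @177 through @219 of nvmf/common.sh above build a two-namespace test topology. The target-side veth ends (nvmf_tgt_if / nvmf_tgt_if2, addressed 10.0.0.3/24 and 10.0.0.4/24) live inside the nvmf_tgt_ns_spdk namespace, the initiator ends (nvmf_init_if / nvmf_init_if2, 10.0.0.1/24 and 10.0.0.2/24) stay in the root namespace, all bridge-side peers are enslaved to nvmf_br, and iptables is opened for TCP port 4420. A condensed sketch of that setup (some per-link "up" commands, the namespace-local loopback handling, and the iptables comment tags are trimmed):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per data path; the *_br ends stay in the root namespace for the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br && ip link set "$dev" up
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the trace simply verify both directions across that bridge before the target is started.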
00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73851 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73851 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 73851 ']' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:52.782 10:04:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:52.782 [2024-11-04 10:04:24.851189] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:52.782 [2024-11-04 10:04:24.852236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.043 [2024-11-04 10:04:25.011009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.043 [2024-11-04 10:04:25.085307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.043 [2024-11-04 10:04:25.085610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.043 [2024-11-04 10:04:25.085747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.043 [2024-11-04 10:04:25.085764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.043 [2024-11-04 10:04:25.085773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
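With connectivity verified, the target binary is launched inside the namespace and the harness blocks until the application's UNIX-domain RPC socket is available (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above). A minimal sketch of that launch-and-wait step; this is only the idea behind the waitforlisten helper, not its actual implementation, and the binary path and flags are taken from the trace:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the RPC socket shows up, bailing out if the target dies first
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done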
00:14:53.043 [2024-11-04 10:04:25.087227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.043 [2024-11-04 10:04:25.087332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.043 [2024-11-04 10:04:25.087374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.043 [2024-11-04 10:04:25.087378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.044 [2024-11-04 10:04:25.145956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 [2024-11-04 10:04:25.223213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 Malloc0 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 [2024-11-04 10:04:25.329419] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.303 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 [ 00:14:53.303 { 00:14:53.303 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:53.303 "subtype": "Discovery", 00:14:53.303 "listen_addresses": [ 00:14:53.303 { 00:14:53.303 "trtype": "TCP", 00:14:53.303 "adrfam": "IPv4", 00:14:53.303 "traddr": "10.0.0.3", 00:14:53.303 "trsvcid": "4420" 00:14:53.303 } 00:14:53.303 ], 00:14:53.303 "allow_any_host": true, 00:14:53.303 "hosts": [] 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.303 "subtype": "NVMe", 00:14:53.303 "listen_addresses": [ 00:14:53.303 { 00:14:53.303 "trtype": "TCP", 00:14:53.303 "adrfam": "IPv4", 00:14:53.303 "traddr": "10.0.0.3", 00:14:53.303 "trsvcid": "4420" 00:14:53.303 } 00:14:53.303 ], 00:14:53.303 "allow_any_host": true, 00:14:53.303 "hosts": [], 00:14:53.303 "serial_number": "SPDK00000000000001", 00:14:53.303 "model_number": "SPDK bdev Controller", 00:14:53.303 "max_namespaces": 32, 00:14:53.303 "min_cntlid": 1, 00:14:53.303 "max_cntlid": 65519, 00:14:53.303 "namespaces": [ 00:14:53.303 { 00:14:53.303 "nsid": 1, 00:14:53.303 "bdev_name": "Malloc0", 00:14:53.303 "name": "Malloc0", 00:14:53.303 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:53.303 "eui64": "ABCDEF0123456789", 00:14:53.303 "uuid": "1d8c9a8b-406a-434f-b836-07430bb49eae" 00:14:53.304 } 00:14:53.304 ] 00:14:53.304 } 00:14:53.304 ] 00:14:53.304 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.304 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:53.304 [2024-11-04 10:04:25.379360] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
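The subsystem state that spdk_nvme_identify is about to query was configured a few lines up through rpc_cmd, which in these tests is a thin wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock. The same configuration, with every argument copied from the trace (the rpc.py path assumes the repo layout shown elsewhere in this log), boils down to:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # what rpc_cmd resolves to here
    $rpc nvmf_create_transport -t tcp -o -u 8192      # flags copied verbatim from the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB ramdisk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_get_subsystems                          # returns the JSON dump shown above

The nvmf_get_subsystems output reproduced above confirms that both the discovery subsystem and nqn.2016-06.io.spdk:cnode1 are listening on 10.0.0.3:4420 before the identify tool connects.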
00:14:53.304 [2024-11-04 10:04:25.379430] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73879 ] 00:14:53.566 [2024-11-04 10:04:25.542254] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:53.566 [2024-11-04 10:04:25.542350] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:53.566 [2024-11-04 10:04:25.542359] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:53.566 [2024-11-04 10:04:25.542374] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:53.566 [2024-11-04 10:04:25.542386] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:53.566 [2024-11-04 10:04:25.542801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:53.566 [2024-11-04 10:04:25.542889] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd5f750 0 00:14:53.566 [2024-11-04 10:04:25.553643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:53.566 [2024-11-04 10:04:25.553669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:53.566 [2024-11-04 10:04:25.553676] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:53.566 [2024-11-04 10:04:25.553680] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:53.566 [2024-11-04 10:04:25.553721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.553729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.553733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.566 [2024-11-04 10:04:25.553750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:53.566 [2024-11-04 10:04:25.553784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.566 [2024-11-04 10:04:25.561657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.566 [2024-11-04 10:04:25.561684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.566 [2024-11-04 10:04:25.561690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.561696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.566 [2024-11-04 10:04:25.561709] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:53.566 [2024-11-04 10:04:25.561720] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:53.566 [2024-11-04 10:04:25.561728] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:53.566 [2024-11-04 10:04:25.561748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.561754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:53.566 [2024-11-04 10:04:25.561758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.566 [2024-11-04 10:04:25.561772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.566 [2024-11-04 10:04:25.561804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.566 [2024-11-04 10:04:25.561889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.566 [2024-11-04 10:04:25.561898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.566 [2024-11-04 10:04:25.561902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.561906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.566 [2024-11-04 10:04:25.561913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:53.566 [2024-11-04 10:04:25.561921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:53.566 [2024-11-04 10:04:25.561930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.561934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.561938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.566 [2024-11-04 10:04:25.561956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.566 [2024-11-04 10:04:25.561975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.566 [2024-11-04 10:04:25.562045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.566 [2024-11-04 10:04:25.562052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.566 [2024-11-04 10:04:25.562056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.566 [2024-11-04 10:04:25.562067] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:53.566 [2024-11-04 10:04:25.562076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.566 [2024-11-04 10:04:25.562084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.566 [2024-11-04 10:04:25.562100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.566 [2024-11-04 10:04:25.562118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.566 [2024-11-04 10:04:25.562176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.566 [2024-11-04 10:04:25.562183] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.566 [2024-11-04 10:04:25.562187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.566 [2024-11-04 10:04:25.562197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.566 [2024-11-04 10:04:25.562213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.566 [2024-11-04 10:04:25.562229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.566 [2024-11-04 10:04:25.562253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.566 [2024-11-04 10:04:25.562309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.566 [2024-11-04 10:04:25.562315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.566 [2024-11-04 10:04:25.562319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.566 [2024-11-04 10:04:25.562329] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:53.566 [2024-11-04 10:04:25.562334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:53.566 [2024-11-04 10:04:25.562342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.566 [2024-11-04 10:04:25.562454] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:53.566 [2024-11-04 10:04:25.562461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.566 [2024-11-04 10:04:25.562471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.566 [2024-11-04 10:04:25.562487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.566 [2024-11-04 10:04:25.562507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.566 [2024-11-04 10:04:25.562571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.566 [2024-11-04 10:04:25.562579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.566 [2024-11-04 10:04:25.562583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:53.566 [2024-11-04 10:04:25.562603] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.566 [2024-11-04 10:04:25.562610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.566 [2024-11-04 10:04:25.562622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.566 [2024-11-04 10:04:25.562639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.566 [2024-11-04 10:04:25.562658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.566 [2024-11-04 10:04:25.562716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.566 [2024-11-04 10:04:25.562723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.566 [2024-11-04 10:04:25.562727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.566 [2024-11-04 10:04:25.562731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.567 [2024-11-04 10:04:25.562736] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.567 [2024-11-04 10:04:25.562742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:53.567 [2024-11-04 10:04:25.562750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:53.567 [2024-11-04 10:04:25.562767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.567 [2024-11-04 10:04:25.562779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.562784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.562792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.567 [2024-11-04 10:04:25.562812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.567 [2024-11-04 10:04:25.562923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.567 [2024-11-04 10:04:25.562930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.567 [2024-11-04 10:04:25.562934] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.562938] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd5f750): datao=0, datal=4096, cccid=0 00:14:53.567 [2024-11-04 10:04:25.562944] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc3740) on tqpair(0xd5f750): expected_datao=0, payload_size=4096 00:14:53.567 [2024-11-04 10:04:25.562949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:14:53.567 [2024-11-04 10:04:25.562958] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.562963] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.562973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.567 [2024-11-04 10:04:25.562979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.567 [2024-11-04 10:04:25.562983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.562987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.567 [2024-11-04 10:04:25.562997] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:53.567 [2024-11-04 10:04:25.563002] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:53.567 [2024-11-04 10:04:25.563007] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:53.567 [2024-11-04 10:04:25.563013] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:53.567 [2024-11-04 10:04:25.563018] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:53.567 [2024-11-04 10:04:25.563024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:53.567 [2024-11-04 10:04:25.563038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.567 [2024-11-04 10:04:25.563050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.567 [2024-11-04 10:04:25.563087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.567 [2024-11-04 10:04:25.563168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.567 [2024-11-04 10:04:25.563175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.567 [2024-11-04 10:04:25.563179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.567 [2024-11-04 10:04:25.563191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.567 [2024-11-04 10:04:25.563213] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.567 [2024-11-04 10:04:25.563234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.567 [2024-11-04 10:04:25.563255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.567 [2024-11-04 10:04:25.563274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.567 [2024-11-04 10:04:25.563288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.567 [2024-11-04 10:04:25.563297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.567 [2024-11-04 10:04:25.563328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3740, cid 0, qid 0 00:14:53.567 [2024-11-04 10:04:25.563335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc38c0, cid 1, qid 0 00:14:53.567 [2024-11-04 10:04:25.563340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3a40, cid 2, qid 0 00:14:53.567 [2024-11-04 10:04:25.563346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.567 [2024-11-04 10:04:25.563351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3d40, cid 4, qid 0 00:14:53.567 [2024-11-04 10:04:25.563460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.567 [2024-11-04 10:04:25.563476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.567 [2024-11-04 10:04:25.563481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3d40) on tqpair=0xd5f750 00:14:53.567 [2024-11-04 10:04:25.563491] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:53.567 [2024-11-04 10:04:25.563497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:53.567 [2024-11-04 10:04:25.563509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.567 [2024-11-04 10:04:25.563542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3d40, cid 4, qid 0 00:14:53.567 [2024-11-04 10:04:25.563626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.567 [2024-11-04 10:04:25.563639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.567 [2024-11-04 10:04:25.563644] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563648] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd5f750): datao=0, datal=4096, cccid=4 00:14:53.567 [2024-11-04 10:04:25.563653] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc3d40) on tqpair(0xd5f750): expected_datao=0, payload_size=4096 00:14:53.567 [2024-11-04 10:04:25.563658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563666] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563671] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.567 [2024-11-04 10:04:25.563686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.567 [2024-11-04 10:04:25.563690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3d40) on tqpair=0xd5f750 00:14:53.567 [2024-11-04 10:04:25.563711] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:53.567 [2024-11-04 10:04:25.563747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.567 [2024-11-04 10:04:25.563792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd5f750) 00:14:53.567 [2024-11-04 10:04:25.563814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.567 [2024-11-04 10:04:25.563843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xdc3d40, cid 4, qid 0 00:14:53.567 [2024-11-04 10:04:25.563850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3ec0, cid 5, qid 0 00:14:53.567 [2024-11-04 10:04:25.563963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.567 [2024-11-04 10:04:25.563970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.567 [2024-11-04 10:04:25.563974] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.567 [2024-11-04 10:04:25.563978] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd5f750): datao=0, datal=1024, cccid=4 00:14:53.567 [2024-11-04 10:04:25.563982] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc3d40) on tqpair(0xd5f750): expected_datao=0, payload_size=1024 00:14:53.568 [2024-11-04 10:04:25.563987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.563994] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.563998] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.568 [2024-11-04 10:04:25.564011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.568 [2024-11-04 10:04:25.564014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3ec0) on tqpair=0xd5f750 00:14:53.568 [2024-11-04 10:04:25.564036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.568 [2024-11-04 10:04:25.564044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.568 [2024-11-04 10:04:25.564048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3d40) on tqpair=0xd5f750 00:14:53.568 [2024-11-04 10:04:25.564065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd5f750) 00:14:53.568 [2024-11-04 10:04:25.564078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.568 [2024-11-04 10:04:25.564102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3d40, cid 4, qid 0 00:14:53.568 [2024-11-04 10:04:25.564190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.568 [2024-11-04 10:04:25.564197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.568 [2024-11-04 10:04:25.564201] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564205] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd5f750): datao=0, datal=3072, cccid=4 00:14:53.568 [2024-11-04 10:04:25.564209] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc3d40) on tqpair(0xd5f750): expected_datao=0, payload_size=3072 00:14:53.568 [2024-11-04 10:04:25.564214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564222] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564226] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.568 [2024-11-04 10:04:25.564241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.568 [2024-11-04 10:04:25.564244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3d40) on tqpair=0xd5f750 00:14:53.568 [2024-11-04 10:04:25.564259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd5f750) 00:14:53.568 [2024-11-04 10:04:25.564271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.568 [2024-11-04 10:04:25.564294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3d40, cid 4, qid 0 00:14:53.568 [2024-11-04 10:04:25.564377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.568 [2024-11-04 10:04:25.564384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.568 [2024-11-04 10:04:25.564388] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564392] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd5f750): datao=0, datal=8, cccid=4 00:14:53.568 [2024-11-04 10:04:25.564397] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdc3d40) on tqpair(0xd5f750): expected_datao=0, payload_size=8 00:14:53.568 [2024-11-04 10:04:25.564401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564408] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564412] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.568 [2024-11-04 10:04:25.564435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.568 [2024-11-04 10:04:25.564438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.568 [2024-11-04 10:04:25.564443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3d40) on tqpair=0xd5f750 00:14:53.568 ===================================================== 00:14:53.568 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:53.568 ===================================================== 00:14:53.568 Controller Capabilities/Features 00:14:53.568 ================================ 00:14:53.568 Vendor ID: 0000 00:14:53.568 Subsystem Vendor ID: 0000 00:14:53.568 Serial Number: .................... 00:14:53.568 Model Number: ........................................ 
00:14:53.568 Firmware Version: 25.01 00:14:53.568 Recommended Arb Burst: 0 00:14:53.568 IEEE OUI Identifier: 00 00 00 00:14:53.568 Multi-path I/O 00:14:53.568 May have multiple subsystem ports: No 00:14:53.568 May have multiple controllers: No 00:14:53.568 Associated with SR-IOV VF: No 00:14:53.568 Max Data Transfer Size: 131072 00:14:53.568 Max Number of Namespaces: 0 00:14:53.568 Max Number of I/O Queues: 1024 00:14:53.568 NVMe Specification Version (VS): 1.3 00:14:53.568 NVMe Specification Version (Identify): 1.3 00:14:53.568 Maximum Queue Entries: 128 00:14:53.568 Contiguous Queues Required: Yes 00:14:53.568 Arbitration Mechanisms Supported 00:14:53.568 Weighted Round Robin: Not Supported 00:14:53.568 Vendor Specific: Not Supported 00:14:53.568 Reset Timeout: 15000 ms 00:14:53.568 Doorbell Stride: 4 bytes 00:14:53.568 NVM Subsystem Reset: Not Supported 00:14:53.568 Command Sets Supported 00:14:53.568 NVM Command Set: Supported 00:14:53.568 Boot Partition: Not Supported 00:14:53.568 Memory Page Size Minimum: 4096 bytes 00:14:53.568 Memory Page Size Maximum: 4096 bytes 00:14:53.568 Persistent Memory Region: Not Supported 00:14:53.568 Optional Asynchronous Events Supported 00:14:53.568 Namespace Attribute Notices: Not Supported 00:14:53.568 Firmware Activation Notices: Not Supported 00:14:53.568 ANA Change Notices: Not Supported 00:14:53.568 PLE Aggregate Log Change Notices: Not Supported 00:14:53.568 LBA Status Info Alert Notices: Not Supported 00:14:53.568 EGE Aggregate Log Change Notices: Not Supported 00:14:53.568 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.568 Zone Descriptor Change Notices: Not Supported 00:14:53.568 Discovery Log Change Notices: Supported 00:14:53.568 Controller Attributes 00:14:53.568 128-bit Host Identifier: Not Supported 00:14:53.568 Non-Operational Permissive Mode: Not Supported 00:14:53.568 NVM Sets: Not Supported 00:14:53.568 Read Recovery Levels: Not Supported 00:14:53.568 Endurance Groups: Not Supported 00:14:53.568 Predictable Latency Mode: Not Supported 00:14:53.568 Traffic Based Keep ALive: Not Supported 00:14:53.568 Namespace Granularity: Not Supported 00:14:53.568 SQ Associations: Not Supported 00:14:53.568 UUID List: Not Supported 00:14:53.568 Multi-Domain Subsystem: Not Supported 00:14:53.568 Fixed Capacity Management: Not Supported 00:14:53.568 Variable Capacity Management: Not Supported 00:14:53.568 Delete Endurance Group: Not Supported 00:14:53.568 Delete NVM Set: Not Supported 00:14:53.568 Extended LBA Formats Supported: Not Supported 00:14:53.568 Flexible Data Placement Supported: Not Supported 00:14:53.568 00:14:53.568 Controller Memory Buffer Support 00:14:53.568 ================================ 00:14:53.568 Supported: No 00:14:53.568 00:14:53.568 Persistent Memory Region Support 00:14:53.568 ================================ 00:14:53.568 Supported: No 00:14:53.568 00:14:53.568 Admin Command Set Attributes 00:14:53.568 ============================ 00:14:53.568 Security Send/Receive: Not Supported 00:14:53.568 Format NVM: Not Supported 00:14:53.568 Firmware Activate/Download: Not Supported 00:14:53.568 Namespace Management: Not Supported 00:14:53.568 Device Self-Test: Not Supported 00:14:53.568 Directives: Not Supported 00:14:53.568 NVMe-MI: Not Supported 00:14:53.568 Virtualization Management: Not Supported 00:14:53.568 Doorbell Buffer Config: Not Supported 00:14:53.568 Get LBA Status Capability: Not Supported 00:14:53.568 Command & Feature Lockdown Capability: Not Supported 00:14:53.568 Abort Command Limit: 1 00:14:53.568 Async 
Event Request Limit: 4 00:14:53.568 Number of Firmware Slots: N/A 00:14:53.568 Firmware Slot 1 Read-Only: N/A 00:14:53.568 Firmware Activation Without Reset: N/A 00:14:53.568 Multiple Update Detection Support: N/A 00:14:53.568 Firmware Update Granularity: No Information Provided 00:14:53.568 Per-Namespace SMART Log: No 00:14:53.568 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.568 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:53.568 Command Effects Log Page: Not Supported 00:14:53.568 Get Log Page Extended Data: Supported 00:14:53.568 Telemetry Log Pages: Not Supported 00:14:53.569 Persistent Event Log Pages: Not Supported 00:14:53.569 Supported Log Pages Log Page: May Support 00:14:53.569 Commands Supported & Effects Log Page: Not Supported 00:14:53.569 Feature Identifiers & Effects Log Page:May Support 00:14:53.569 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.569 Data Area 4 for Telemetry Log: Not Supported 00:14:53.569 Error Log Page Entries Supported: 128 00:14:53.569 Keep Alive: Not Supported 00:14:53.569 00:14:53.569 NVM Command Set Attributes 00:14:53.569 ========================== 00:14:53.569 Submission Queue Entry Size 00:14:53.569 Max: 1 00:14:53.569 Min: 1 00:14:53.569 Completion Queue Entry Size 00:14:53.569 Max: 1 00:14:53.569 Min: 1 00:14:53.569 Number of Namespaces: 0 00:14:53.569 Compare Command: Not Supported 00:14:53.569 Write Uncorrectable Command: Not Supported 00:14:53.569 Dataset Management Command: Not Supported 00:14:53.569 Write Zeroes Command: Not Supported 00:14:53.569 Set Features Save Field: Not Supported 00:14:53.569 Reservations: Not Supported 00:14:53.569 Timestamp: Not Supported 00:14:53.569 Copy: Not Supported 00:14:53.569 Volatile Write Cache: Not Present 00:14:53.569 Atomic Write Unit (Normal): 1 00:14:53.569 Atomic Write Unit (PFail): 1 00:14:53.569 Atomic Compare & Write Unit: 1 00:14:53.569 Fused Compare & Write: Supported 00:14:53.569 Scatter-Gather List 00:14:53.569 SGL Command Set: Supported 00:14:53.569 SGL Keyed: Supported 00:14:53.569 SGL Bit Bucket Descriptor: Not Supported 00:14:53.569 SGL Metadata Pointer: Not Supported 00:14:53.569 Oversized SGL: Not Supported 00:14:53.569 SGL Metadata Address: Not Supported 00:14:53.569 SGL Offset: Supported 00:14:53.569 Transport SGL Data Block: Not Supported 00:14:53.569 Replay Protected Memory Block: Not Supported 00:14:53.569 00:14:53.569 Firmware Slot Information 00:14:53.569 ========================= 00:14:53.569 Active slot: 0 00:14:53.569 00:14:53.569 00:14:53.569 Error Log 00:14:53.569 ========= 00:14:53.569 00:14:53.569 Active Namespaces 00:14:53.569 ================= 00:14:53.569 Discovery Log Page 00:14:53.569 ================== 00:14:53.569 Generation Counter: 2 00:14:53.569 Number of Records: 2 00:14:53.569 Record Format: 0 00:14:53.569 00:14:53.569 Discovery Log Entry 0 00:14:53.569 ---------------------- 00:14:53.569 Transport Type: 3 (TCP) 00:14:53.569 Address Family: 1 (IPv4) 00:14:53.569 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:53.569 Entry Flags: 00:14:53.569 Duplicate Returned Information: 1 00:14:53.569 Explicit Persistent Connection Support for Discovery: 1 00:14:53.569 Transport Requirements: 00:14:53.569 Secure Channel: Not Required 00:14:53.569 Port ID: 0 (0x0000) 00:14:53.569 Controller ID: 65535 (0xffff) 00:14:53.569 Admin Max SQ Size: 128 00:14:53.569 Transport Service Identifier: 4420 00:14:53.569 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:53.569 Transport Address: 10.0.0.3 00:14:53.569 
Discovery Log Entry 1 00:14:53.569 ---------------------- 00:14:53.569 Transport Type: 3 (TCP) 00:14:53.569 Address Family: 1 (IPv4) 00:14:53.569 Subsystem Type: 2 (NVM Subsystem) 00:14:53.569 Entry Flags: 00:14:53.569 Duplicate Returned Information: 0 00:14:53.569 Explicit Persistent Connection Support for Discovery: 0 00:14:53.569 Transport Requirements: 00:14:53.569 Secure Channel: Not Required 00:14:53.569 Port ID: 0 (0x0000) 00:14:53.569 Controller ID: 65535 (0xffff) 00:14:53.569 Admin Max SQ Size: 128 00:14:53.569 Transport Service Identifier: 4420 00:14:53.569 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:53.569 Transport Address: 10.0.0.3 [2024-11-04 10:04:25.564537] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:53.569 [2024-11-04 10:04:25.564551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3740) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.564558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.569 [2024-11-04 10:04:25.564564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc38c0) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.564569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.569 [2024-11-04 10:04:25.564574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3a40) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.564579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.569 [2024-11-04 10:04:25.564585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.564606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.569 [2024-11-04 10:04:25.564617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.564622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.564626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.569 [2024-11-04 10:04:25.564635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.569 [2024-11-04 10:04:25.564658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.569 [2024-11-04 10:04:25.564716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.569 [2024-11-04 10:04:25.564724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.569 [2024-11-04 10:04:25.564728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.564732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.564740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.564745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.564749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.569 [2024-11-04 10:04:25.564756] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.569 [2024-11-04 10:04:25.564778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.569 [2024-11-04 10:04:25.564879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.569 [2024-11-04 10:04:25.564891] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.569 [2024-11-04 10:04:25.564896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.564900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.564906] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:53.569 [2024-11-04 10:04:25.564911] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:53.569 [2024-11-04 10:04:25.564922] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.564927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.564931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.569 [2024-11-04 10:04:25.564939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.569 [2024-11-04 10:04:25.564957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.569 [2024-11-04 10:04:25.565029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.569 [2024-11-04 10:04:25.565036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.569 [2024-11-04 10:04:25.565040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.565044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.565055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.565060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.565064] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.569 [2024-11-04 10:04:25.565072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.569 [2024-11-04 10:04:25.565088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.569 [2024-11-04 10:04:25.565138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.569 [2024-11-04 10:04:25.565145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.569 [2024-11-04 10:04:25.565149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.565153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.565163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.565168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.565172] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.569 [2024-11-04 10:04:25.565179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.569 [2024-11-04 10:04:25.565195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.569 [2024-11-04 10:04:25.565257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.569 [2024-11-04 10:04:25.565272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.569 [2024-11-04 10:04:25.565276] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.569 [2024-11-04 10:04:25.565280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.569 [2024-11-04 10:04:25.565290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.565295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.565299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.570 [2024-11-04 10:04:25.565307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.570 [2024-11-04 10:04:25.565323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.570 [2024-11-04 10:04:25.565380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.570 [2024-11-04 10:04:25.565386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.570 [2024-11-04 10:04:25.565390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.565394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.570 [2024-11-04 10:04:25.565405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.565410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.565413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.570 [2024-11-04 10:04:25.565421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.570 [2024-11-04 10:04:25.565437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.570 [2024-11-04 10:04:25.565499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.570 [2024-11-04 10:04:25.565506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.570 [2024-11-04 10:04:25.565510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.565514] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.570 [2024-11-04 10:04:25.565524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.565529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.565533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.570 [2024-11-04 10:04:25.565540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.570 [2024-11-04 10:04:25.565557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.570 [2024-11-04 10:04:25.569614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.570 [2024-11-04 10:04:25.569638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.570 [2024-11-04 10:04:25.569644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.569649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.570 [2024-11-04 10:04:25.569664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.569669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.569673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd5f750) 00:14:53.570 [2024-11-04 10:04:25.569684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.570 [2024-11-04 10:04:25.569712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdc3bc0, cid 3, qid 0 00:14:53.570 [2024-11-04 10:04:25.569789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.570 [2024-11-04 10:04:25.569797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.570 [2024-11-04 10:04:25.569800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.570 [2024-11-04 10:04:25.569805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdc3bc0) on tqpair=0xd5f750 00:14:53.570 [2024-11-04 10:04:25.569814] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:14:53.570 00:14:53.570 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:53.570 [2024-11-04 10:04:25.608555] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
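The identify pass above is driven by the prebuilt spdk_nvme_identify binary, which receives the whole transport ID as a single -r string. For readers following the trace, the same connection can be expressed against the public SPDK NVMe host API. The sketch below is illustrative only and is not part of the test run; the application name "identify_sketch" is an assumption made for this example, and default controller options are used (NULL opts, size 0).

/* Minimal sketch (not part of the test run): connect to the subsystem that
 * spdk_nvme_identify targets above, using the public SPDK NVMe host API.
 * The transport ID string is the one the test passes via -r. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID the test passes to spdk_nvme_identify -r */
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* NULL opts / size 0 selects the default controller options; connecting
	 * triggers the FABRIC CONNECT and property capsules seen in the trace. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() failed\n");
		return 1;
	}

	/* ... query the controller here ... */
	spdk_nvme_detach(ctrlr);
	return 0;
}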
00:14:53.570 [2024-11-04 10:04:25.608623] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73885 ] 00:14:53.834 [2024-11-04 10:04:25.772404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:53.834 [2024-11-04 10:04:25.772498] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:53.834 [2024-11-04 10:04:25.772505] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:53.834 [2024-11-04 10:04:25.772519] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:53.834 [2024-11-04 10:04:25.772530] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:53.834 [2024-11-04 10:04:25.772954] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:53.834 [2024-11-04 10:04:25.773030] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ab4750 0 00:14:53.834 [2024-11-04 10:04:25.782691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:53.834 [2024-11-04 10:04:25.782740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:53.834 [2024-11-04 10:04:25.782747] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:53.834 [2024-11-04 10:04:25.782751] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:53.834 [2024-11-04 10:04:25.782787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.782794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.782799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.834 [2024-11-04 10:04:25.782824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:53.834 [2024-11-04 10:04:25.782859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.834 [2024-11-04 10:04:25.790658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.834 [2024-11-04 10:04:25.790686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.834 [2024-11-04 10:04:25.790692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.790698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.834 [2024-11-04 10:04:25.790712] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:53.834 [2024-11-04 10:04:25.790722] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:53.834 [2024-11-04 10:04:25.790729] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:53.834 [2024-11-04 10:04:25.790750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.790756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.790770] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.834 [2024-11-04 10:04:25.790783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.834 [2024-11-04 10:04:25.790827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.834 [2024-11-04 10:04:25.790898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.834 [2024-11-04 10:04:25.790905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.834 [2024-11-04 10:04:25.790909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.790914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.834 [2024-11-04 10:04:25.790920] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:53.834 [2024-11-04 10:04:25.790928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:53.834 [2024-11-04 10:04:25.790937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.790941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.790945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.834 [2024-11-04 10:04:25.790953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.834 [2024-11-04 10:04:25.790974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.834 [2024-11-04 10:04:25.791306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.834 [2024-11-04 10:04:25.791322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.834 [2024-11-04 10:04:25.791327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.791331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.834 [2024-11-04 10:04:25.791338] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:53.834 [2024-11-04 10:04:25.791348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.834 [2024-11-04 10:04:25.791356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.791361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.791365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.834 [2024-11-04 10:04:25.791373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.834 [2024-11-04 10:04:25.791393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.834 [2024-11-04 10:04:25.791456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.834 [2024-11-04 10:04:25.791463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.834 
[2024-11-04 10:04:25.791467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.791471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.834 [2024-11-04 10:04:25.791478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.834 [2024-11-04 10:04:25.791489] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.791493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.791497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.834 [2024-11-04 10:04:25.791505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.834 [2024-11-04 10:04:25.791524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.834 [2024-11-04 10:04:25.791985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.834 [2024-11-04 10:04:25.792005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.834 [2024-11-04 10:04:25.792010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.792014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.834 [2024-11-04 10:04:25.792020] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:53.834 [2024-11-04 10:04:25.792026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:53.834 [2024-11-04 10:04:25.792036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.834 [2024-11-04 10:04:25.792148] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:53.834 [2024-11-04 10:04:25.792155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.834 [2024-11-04 10:04:25.792166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.792170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.792174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.834 [2024-11-04 10:04:25.792182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.834 [2024-11-04 10:04:25.792206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.834 [2024-11-04 10:04:25.792368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.834 [2024-11-04 10:04:25.792380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.834 [2024-11-04 10:04:25.792385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.792389] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 
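From this point the trace records SPDK's controller-initialization state machine for nqn.2016-06.io.spdk:cnode1: read VS, read CAP, check CC.EN, disable and wait for CSTS.RDY = 0, then enable the controller by writing CC.EN = 1. On NVMe-oF these register accesses travel as Fabrics Property Get/Set capsules on the admin queue, which is why each "setting state to ..." step is followed by a FABRIC PROPERTY GET or SET notice. Once initialization completes, an application can read the negotiated values back through public accessors; a minimal sketch, assuming the ctrlr handle from the previous example and accessor/field names as I recall them from the SPDK headers:

/* Minimal sketch (assumes a connected struct spdk_nvme_ctrlr *ctrlr, e.g.
 * from the previous example): read back the registers and identify data
 * that the initialization sequence above negotiates. */
#include <stdio.h>
#include "spdk/nvme.h"

static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* "NVMe Specification Version (VS): 1.3" and "Maximum Queue Entries: 128"
	 * in the identify dump further down come from VS and CAP.MQES
	 * (MQES is zero-based). */
	printf("NVMe spec version: %u.%u\n", vs.bits.mjr, vs.bits.mnr);
	printf("Max queue entries: %u\n", cap.bits.mqes + 1);
	printf("Controller ready (CSTS.RDY): %u\n", csts.bits.rdy);

	/* Matches the "Model Number: SPDK bdev Controller" and
	 * "Serial Number: SPDK00000000000001" lines printed by the tool. */
	printf("Model:  %.40s\n", (const char *)cdata->mn);
	printf("Serial: %.20s\n", (const char *)cdata->sn);
}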
00:14:53.834 [2024-11-04 10:04:25.792395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.834 [2024-11-04 10:04:25.792406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.792411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.834 [2024-11-04 10:04:25.792415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.792423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.835 [2024-11-04 10:04:25.792441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.835 [2024-11-04 10:04:25.792752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.835 [2024-11-04 10:04:25.792761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.835 [2024-11-04 10:04:25.792766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.792770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.835 [2024-11-04 10:04:25.792775] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.835 [2024-11-04 10:04:25.792781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.792790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:53.835 [2024-11-04 10:04:25.792807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.792819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.792824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.792833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.835 [2024-11-04 10:04:25.792855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.835 [2024-11-04 10:04:25.793229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.835 [2024-11-04 10:04:25.793245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.835 [2024-11-04 10:04:25.793251] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793255] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab4750): datao=0, datal=4096, cccid=0 00:14:53.835 [2024-11-04 10:04:25.793260] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b18740) on tqpair(0x1ab4750): expected_datao=0, payload_size=4096 00:14:53.835 [2024-11-04 10:04:25.793266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793276] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793281] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.835 [2024-11-04 10:04:25.793296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.835 [2024-11-04 10:04:25.793300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.835 [2024-11-04 10:04:25.793315] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:53.835 [2024-11-04 10:04:25.793321] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:53.835 [2024-11-04 10:04:25.793326] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:53.835 [2024-11-04 10:04:25.793331] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:53.835 [2024-11-04 10:04:25.793336] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:53.835 [2024-11-04 10:04:25.793341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.793356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.793368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.793385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.835 [2024-11-04 10:04:25.793408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.835 [2024-11-04 10:04:25.793801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.835 [2024-11-04 10:04:25.793818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.835 [2024-11-04 10:04:25.793822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.835 [2024-11-04 10:04:25.793835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.793852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.835 [2024-11-04 10:04:25.793859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 
10:04:25.793867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.793873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.835 [2024-11-04 10:04:25.793881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.793895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.835 [2024-11-04 10:04:25.793901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.793916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.835 [2024-11-04 10:04:25.793922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.793936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.793945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.793950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.793957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.835 [2024-11-04 10:04:25.793981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18740, cid 0, qid 0 00:14:53.835 [2024-11-04 10:04:25.793989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b188c0, cid 1, qid 0 00:14:53.835 [2024-11-04 10:04:25.793994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18a40, cid 2, qid 0 00:14:53.835 [2024-11-04 10:04:25.794000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.835 [2024-11-04 10:04:25.794005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18d40, cid 4, qid 0 00:14:53.835 [2024-11-04 10:04:25.794472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.835 [2024-11-04 10:04:25.794487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.835 [2024-11-04 10:04:25.794492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.794496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18d40) on tqpair=0x1ab4750 00:14:53.835 [2024-11-04 10:04:25.794502] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:53.835 [2024-11-04 10:04:25.794508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.794518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.794530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.794538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.794543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.794547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.794555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.835 [2024-11-04 10:04:25.794576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18d40, cid 4, qid 0 00:14:53.835 [2024-11-04 10:04:25.797667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.835 [2024-11-04 10:04:25.797691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.835 [2024-11-04 10:04:25.797696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.797701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18d40) on tqpair=0x1ab4750 00:14:53.835 [2024-11-04 10:04:25.797776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.797792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:53.835 [2024-11-04 10:04:25.797803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.797807] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab4750) 00:14:53.835 [2024-11-04 10:04:25.797818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.835 [2024-11-04 10:04:25.797844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18d40, cid 4, qid 0 00:14:53.835 [2024-11-04 10:04:25.797927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.835 [2024-11-04 10:04:25.797935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.835 [2024-11-04 10:04:25.797939] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.835 [2024-11-04 10:04:25.797943] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab4750): datao=0, datal=4096, cccid=4 00:14:53.835 [2024-11-04 10:04:25.797948] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b18d40) on tqpair(0x1ab4750): expected_datao=0, payload_size=4096 00:14:53.836 [2024-11-04 10:04:25.797953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.797962] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.797966] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 
10:04:25.798029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.836 [2024-11-04 10:04:25.798036] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.836 [2024-11-04 10:04:25.798039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18d40) on tqpair=0x1ab4750 00:14:53.836 [2024-11-04 10:04:25.798062] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:53.836 [2024-11-04 10:04:25.798078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.798089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.798098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.798110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.836 [2024-11-04 10:04:25.798133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18d40, cid 4, qid 0 00:14:53.836 [2024-11-04 10:04:25.798505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.836 [2024-11-04 10:04:25.798521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.836 [2024-11-04 10:04:25.798526] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798530] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab4750): datao=0, datal=4096, cccid=4 00:14:53.836 [2024-11-04 10:04:25.798535] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b18d40) on tqpair(0x1ab4750): expected_datao=0, payload_size=4096 00:14:53.836 [2024-11-04 10:04:25.798547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798555] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798559] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.836 [2024-11-04 10:04:25.798575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.836 [2024-11-04 10:04:25.798578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18d40) on tqpair=0x1ab4750 00:14:53.836 [2024-11-04 10:04:25.798643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.798657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.798674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.798687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.836 [2024-11-04 10:04:25.798709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18d40, cid 4, qid 0 00:14:53.836 [2024-11-04 10:04:25.798951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.836 [2024-11-04 10:04:25.798969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.836 [2024-11-04 10:04:25.798974] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798978] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab4750): datao=0, datal=4096, cccid=4 00:14:53.836 [2024-11-04 10:04:25.798983] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b18d40) on tqpair(0x1ab4750): expected_datao=0, payload_size=4096 00:14:53.836 [2024-11-04 10:04:25.798987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.798995] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799000] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.836 [2024-11-04 10:04:25.799097] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.836 [2024-11-04 10:04:25.799100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18d40) on tqpair=0x1ab4750 00:14:53.836 [2024-11-04 10:04:25.799114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.799124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.799136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.799144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.799150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.799157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.799163] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:53.836 [2024-11-04 10:04:25.799169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:53.836 [2024-11-04 10:04:25.799175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:53.836 [2024-11-04 10:04:25.799195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 
[2024-11-04 10:04:25.799200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.799208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.836 [2024-11-04 10:04:25.799217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.799231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.836 [2024-11-04 10:04:25.799259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18d40, cid 4, qid 0 00:14:53.836 [2024-11-04 10:04:25.799267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18ec0, cid 5, qid 0 00:14:53.836 [2024-11-04 10:04:25.799685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.836 [2024-11-04 10:04:25.799701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.836 [2024-11-04 10:04:25.799706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18d40) on tqpair=0x1ab4750 00:14:53.836 [2024-11-04 10:04:25.799718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.836 [2024-11-04 10:04:25.799724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.836 [2024-11-04 10:04:25.799728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18ec0) on tqpair=0x1ab4750 00:14:53.836 [2024-11-04 10:04:25.799744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.799756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.836 [2024-11-04 10:04:25.799797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18ec0, cid 5, qid 0 00:14:53.836 [2024-11-04 10:04:25.799858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.836 [2024-11-04 10:04:25.799866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.836 [2024-11-04 10:04:25.799870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18ec0) on tqpair=0x1ab4750 00:14:53.836 [2024-11-04 10:04:25.799885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.799890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.799897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.836 [2024-11-04 10:04:25.799915] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18ec0, cid 5, qid 0 00:14:53.836 [2024-11-04 10:04:25.800372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.836 [2024-11-04 10:04:25.800387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.836 [2024-11-04 10:04:25.800392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.800396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18ec0) on tqpair=0x1ab4750 00:14:53.836 [2024-11-04 10:04:25.800408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.800413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.800420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.836 [2024-11-04 10:04:25.800449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18ec0, cid 5, qid 0 00:14:53.836 [2024-11-04 10:04:25.800504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.836 [2024-11-04 10:04:25.800511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.836 [2024-11-04 10:04:25.800515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.800519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18ec0) on tqpair=0x1ab4750 00:14:53.836 [2024-11-04 10:04:25.800541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.800546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.800554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.836 [2024-11-04 10:04:25.800562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.836 [2024-11-04 10:04:25.800566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab4750) 00:14:53.836 [2024-11-04 10:04:25.800573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.837 [2024-11-04 10:04:25.800581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.800585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ab4750) 00:14:53.837 [2024-11-04 10:04:25.800607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.837 [2024-11-04 10:04:25.800617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.800624] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ab4750) 00:14:53.837 [2024-11-04 10:04:25.800630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.837 [2024-11-04 10:04:25.800653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18ec0, cid 5, qid 0 00:14:53.837 
[2024-11-04 10:04:25.800661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18d40, cid 4, qid 0 00:14:53.837 [2024-11-04 10:04:25.800667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b19040, cid 6, qid 0 00:14:53.837 [2024-11-04 10:04:25.800672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b191c0, cid 7, qid 0 00:14:53.837 [2024-11-04 10:04:25.801181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.837 [2024-11-04 10:04:25.801197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.837 [2024-11-04 10:04:25.801202] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801206] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab4750): datao=0, datal=8192, cccid=5 00:14:53.837 [2024-11-04 10:04:25.801211] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b18ec0) on tqpair(0x1ab4750): expected_datao=0, payload_size=8192 00:14:53.837 [2024-11-04 10:04:25.801216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801234] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801240] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.837 [2024-11-04 10:04:25.801252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.837 [2024-11-04 10:04:25.801255] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801260] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab4750): datao=0, datal=512, cccid=4 00:14:53.837 [2024-11-04 10:04:25.801264] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b18d40) on tqpair(0x1ab4750): expected_datao=0, payload_size=512 00:14:53.837 [2024-11-04 10:04:25.801269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801276] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801279] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.837 [2024-11-04 10:04:25.801291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.837 [2024-11-04 10:04:25.801295] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801299] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab4750): datao=0, datal=512, cccid=6 00:14:53.837 [2024-11-04 10:04:25.801303] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b19040) on tqpair(0x1ab4750): expected_datao=0, payload_size=512 00:14:53.837 [2024-11-04 10:04:25.801308] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801314] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801318] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:53.837 [2024-11-04 10:04:25.801330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:53.837 [2024-11-04 10:04:25.801334] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801337] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab4750): datao=0, datal=4096, cccid=7 00:14:53.837 [2024-11-04 10:04:25.801342] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b191c0) on tqpair(0x1ab4750): expected_datao=0, payload_size=4096 00:14:53.837 [2024-11-04 10:04:25.801346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801353] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.837 [2024-11-04 10:04:25.801372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.837 [2024-11-04 10:04:25.801376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18ec0) on tqpair=0x1ab4750 00:14:53.837 [2024-11-04 10:04:25.801397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.837 [2024-11-04 10:04:25.801404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.837 ===================================================== 00:14:53.837 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.837 ===================================================== 00:14:53.837 Controller Capabilities/Features 00:14:53.837 ================================ 00:14:53.837 Vendor ID: 8086 00:14:53.837 Subsystem Vendor ID: 8086 00:14:53.837 Serial Number: SPDK00000000000001 00:14:53.837 Model Number: SPDK bdev Controller 00:14:53.837 Firmware Version: 25.01 00:14:53.837 Recommended Arb Burst: 6 00:14:53.837 IEEE OUI Identifier: e4 d2 5c 00:14:53.837 Multi-path I/O 00:14:53.837 May have multiple subsystem ports: Yes 00:14:53.837 May have multiple controllers: Yes 00:14:53.837 Associated with SR-IOV VF: No 00:14:53.837 Max Data Transfer Size: 131072 00:14:53.837 Max Number of Namespaces: 32 00:14:53.837 Max Number of I/O Queues: 127 00:14:53.837 NVMe Specification Version (VS): 1.3 00:14:53.837 NVMe Specification Version (Identify): 1.3 00:14:53.837 Maximum Queue Entries: 128 00:14:53.837 Contiguous Queues Required: Yes 00:14:53.837 Arbitration Mechanisms Supported 00:14:53.837 Weighted Round Robin: Not Supported 00:14:53.837 Vendor Specific: Not Supported 00:14:53.837 Reset Timeout: 15000 ms 00:14:53.837 Doorbell Stride: 4 bytes 00:14:53.837 NVM Subsystem Reset: Not Supported 00:14:53.837 Command Sets Supported 00:14:53.837 NVM Command Set: Supported 00:14:53.837 Boot Partition: Not Supported 00:14:53.837 Memory Page Size Minimum: 4096 bytes 00:14:53.837 Memory Page Size Maximum: 4096 bytes 00:14:53.837 Persistent Memory Region: Not Supported 00:14:53.837 Optional Asynchronous Events Supported 00:14:53.837 Namespace Attribute Notices: Supported 00:14:53.837 Firmware Activation Notices: Not Supported 00:14:53.837 ANA Change Notices: Not Supported 00:14:53.837 PLE Aggregate Log Change Notices: Not Supported 00:14:53.837 LBA Status Info Alert Notices: Not Supported 00:14:53.837 EGE Aggregate Log Change Notices: Not Supported 00:14:53.837 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.837 Zone Descriptor Change Notices: Not Supported 00:14:53.837 Discovery Log Change 
Notices: Not Supported 00:14:53.837 Controller Attributes 00:14:53.837 128-bit Host Identifier: Supported 00:14:53.837 Non-Operational Permissive Mode: Not Supported 00:14:53.837 NVM Sets: Not Supported 00:14:53.837 Read Recovery Levels: Not Supported 00:14:53.837 Endurance Groups: Not Supported 00:14:53.837 Predictable Latency Mode: Not Supported 00:14:53.837 Traffic Based Keep ALive: Not Supported 00:14:53.837 Namespace Granularity: Not Supported 00:14:53.837 SQ Associations: Not Supported 00:14:53.837 UUID List: Not Supported 00:14:53.837 Multi-Domain Subsystem: Not Supported 00:14:53.837 Fixed Capacity Management: Not Supported 00:14:53.837 Variable Capacity Management: Not Supported 00:14:53.837 Delete Endurance Group: Not Supported 00:14:53.837 Delete NVM Set: Not Supported 00:14:53.837 Extended LBA Formats Supported: Not Supported 00:14:53.837 Flexible Data Placement Supported: Not Supported 00:14:53.837 00:14:53.837 Controller Memory Buffer Support 00:14:53.837 ================================ 00:14:53.837 Supported: No 00:14:53.837 00:14:53.837 Persistent Memory Region Support 00:14:53.837 ================================ 00:14:53.837 Supported: No 00:14:53.837 00:14:53.837 Admin Command Set Attributes 00:14:53.837 ============================ 00:14:53.837 Security Send/Receive: Not Supported 00:14:53.837 Format NVM: Not Supported 00:14:53.837 Firmware Activate/Download: Not Supported 00:14:53.837 Namespace Management: Not Supported 00:14:53.837 Device Self-Test: Not Supported 00:14:53.837 Directives: Not Supported 00:14:53.837 NVMe-MI: Not Supported 00:14:53.837 Virtualization Management: Not Supported 00:14:53.837 Doorbell Buffer Config: Not Supported 00:14:53.837 Get LBA Status Capability: Not Supported 00:14:53.837 Command & Feature Lockdown Capability: Not Supported 00:14:53.837 Abort Command Limit: 4 00:14:53.837 Async Event Request Limit: 4 00:14:53.837 Number of Firmware Slots: N/A 00:14:53.837 Firmware Slot 1 Read-Only: N/A 00:14:53.837 Firmware Activation Without Reset: [2024-11-04 10:04:25.801408] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18d40) on tqpair=0x1ab4750 00:14:53.837 [2024-11-04 10:04:25.801425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.837 [2024-11-04 10:04:25.801432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.837 [2024-11-04 10:04:25.801436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.837 [2024-11-04 10:04:25.801440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b19040) on tqpair=0x1ab4750 00:14:53.837 [2024-11-04 10:04:25.801447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.837 [2024-11-04 10:04:25.801454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.838 [2024-11-04 10:04:25.801457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.838 [2024-11-04 10:04:25.801461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b191c0) on tqpair=0x1ab4750 00:14:53.838 N/A 00:14:53.838 Multiple Update Detection Support: N/A 00:14:53.838 Firmware Update Granularity: No Information Provided 00:14:53.838 Per-Namespace SMART Log: No 00:14:53.838 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.838 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:53.838 Command Effects Log Page: Supported 00:14:53.838 Get Log Page 
Extended Data: Supported 00:14:53.838 Telemetry Log Pages: Not Supported 00:14:53.838 Persistent Event Log Pages: Not Supported 00:14:53.838 Supported Log Pages Log Page: May Support 00:14:53.838 Commands Supported & Effects Log Page: Not Supported 00:14:53.838 Feature Identifiers & Effects Log Page:May Support 00:14:53.838 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.838 Data Area 4 for Telemetry Log: Not Supported 00:14:53.838 Error Log Page Entries Supported: 128 00:14:53.838 Keep Alive: Supported 00:14:53.838 Keep Alive Granularity: 10000 ms 00:14:53.838 00:14:53.838 NVM Command Set Attributes 00:14:53.838 ========================== 00:14:53.838 Submission Queue Entry Size 00:14:53.838 Max: 64 00:14:53.838 Min: 64 00:14:53.838 Completion Queue Entry Size 00:14:53.838 Max: 16 00:14:53.838 Min: 16 00:14:53.838 Number of Namespaces: 32 00:14:53.838 Compare Command: Supported 00:14:53.838 Write Uncorrectable Command: Not Supported 00:14:53.838 Dataset Management Command: Supported 00:14:53.838 Write Zeroes Command: Supported 00:14:53.838 Set Features Save Field: Not Supported 00:14:53.838 Reservations: Supported 00:14:53.838 Timestamp: Not Supported 00:14:53.838 Copy: Supported 00:14:53.838 Volatile Write Cache: Present 00:14:53.838 Atomic Write Unit (Normal): 1 00:14:53.838 Atomic Write Unit (PFail): 1 00:14:53.838 Atomic Compare & Write Unit: 1 00:14:53.838 Fused Compare & Write: Supported 00:14:53.838 Scatter-Gather List 00:14:53.838 SGL Command Set: Supported 00:14:53.838 SGL Keyed: Supported 00:14:53.838 SGL Bit Bucket Descriptor: Not Supported 00:14:53.838 SGL Metadata Pointer: Not Supported 00:14:53.838 Oversized SGL: Not Supported 00:14:53.838 SGL Metadata Address: Not Supported 00:14:53.838 SGL Offset: Supported 00:14:53.838 Transport SGL Data Block: Not Supported 00:14:53.838 Replay Protected Memory Block: Not Supported 00:14:53.838 00:14:53.838 Firmware Slot Information 00:14:53.838 ========================= 00:14:53.838 Active slot: 1 00:14:53.838 Slot 1 Firmware Revision: 25.01 00:14:53.838 00:14:53.838 00:14:53.838 Commands Supported and Effects 00:14:53.838 ============================== 00:14:53.838 Admin Commands 00:14:53.838 -------------- 00:14:53.838 Get Log Page (02h): Supported 00:14:53.838 Identify (06h): Supported 00:14:53.838 Abort (08h): Supported 00:14:53.838 Set Features (09h): Supported 00:14:53.838 Get Features (0Ah): Supported 00:14:53.838 Asynchronous Event Request (0Ch): Supported 00:14:53.838 Keep Alive (18h): Supported 00:14:53.838 I/O Commands 00:14:53.838 ------------ 00:14:53.838 Flush (00h): Supported LBA-Change 00:14:53.838 Write (01h): Supported LBA-Change 00:14:53.838 Read (02h): Supported 00:14:53.838 Compare (05h): Supported 00:14:53.838 Write Zeroes (08h): Supported LBA-Change 00:14:53.838 Dataset Management (09h): Supported LBA-Change 00:14:53.838 Copy (19h): Supported LBA-Change 00:14:53.838 00:14:53.838 Error Log 00:14:53.838 ========= 00:14:53.838 00:14:53.838 Arbitration 00:14:53.838 =========== 00:14:53.838 Arbitration Burst: 1 00:14:53.838 00:14:53.838 Power Management 00:14:53.838 ================ 00:14:53.838 Number of Power States: 1 00:14:53.838 Current Power State: Power State #0 00:14:53.838 Power State #0: 00:14:53.838 Max Power: 0.00 W 00:14:53.838 Non-Operational State: Operational 00:14:53.838 Entry Latency: Not Reported 00:14:53.838 Exit Latency: Not Reported 00:14:53.838 Relative Read Throughput: 0 00:14:53.838 Relative Read Latency: 0 00:14:53.838 Relative Write Throughput: 0 00:14:53.838 Relative Write Latency: 
0 00:14:53.838 Idle Power: Not Reported 00:14:53.838 Active Power: Not Reported 00:14:53.838 Non-Operational Permissive Mode: Not Supported 00:14:53.838 00:14:53.838 Health Information 00:14:53.838 ================== 00:14:53.838 Critical Warnings: 00:14:53.838 Available Spare Space: OK 00:14:53.838 Temperature: OK 00:14:53.838 Device Reliability: OK 00:14:53.838 Read Only: No 00:14:53.838 Volatile Memory Backup: OK 00:14:53.838 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:53.838 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:53.838 Available Spare: 0% 00:14:53.838 Available Spare Threshold: 0% 00:14:53.838 Life Percentage Used:[2024-11-04 10:04:25.804664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.838 [2024-11-04 10:04:25.804685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ab4750) 00:14:53.838 [2024-11-04 10:04:25.804711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.838 [2024-11-04 10:04:25.804741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b191c0, cid 7, qid 0 00:14:53.838 [2024-11-04 10:04:25.804988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.838 [2024-11-04 10:04:25.805004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.838 [2024-11-04 10:04:25.805008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.838 [2024-11-04 10:04:25.805013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b191c0) on tqpair=0x1ab4750 00:14:53.838 [2024-11-04 10:04:25.805057] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:53.838 [2024-11-04 10:04:25.805070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18740) on tqpair=0x1ab4750 00:14:53.838 [2024-11-04 10:04:25.805077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.838 [2024-11-04 10:04:25.805083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b188c0) on tqpair=0x1ab4750 00:14:53.838 [2024-11-04 10:04:25.805088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.838 [2024-11-04 10:04:25.805093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18a40) on tqpair=0x1ab4750 00:14:53.838 [2024-11-04 10:04:25.805098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.838 [2024-11-04 10:04:25.805104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.838 [2024-11-04 10:04:25.805109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.838 [2024-11-04 10:04:25.805130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.838 [2024-11-04 10:04:25.805135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.838 [2024-11-04 10:04:25.805139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.838 [2024-11-04 10:04:25.805148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:53.838 [2024-11-04 10:04:25.805171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.838 [2024-11-04 10:04:25.805646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.838 [2024-11-04 10:04:25.805661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.838 [2024-11-04 10:04:25.805682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.838 [2024-11-04 10:04:25.805687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.838 [2024-11-04 10:04:25.805696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.838 [2024-11-04 10:04:25.805701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.838 [2024-11-04 10:04:25.805705] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.805713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.805739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.806170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 10:04:25.806199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.806220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.806230] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:53.839 [2024-11-04 10:04:25.806235] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:53.839 [2024-11-04 10:04:25.806247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.806279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.806300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.806380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 10:04:25.806403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.806407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.806423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.806440] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.806459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.806763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 10:04:25.806779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.806784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.806800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.806809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.806816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.806837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.807170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 10:04:25.807185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.807189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.807194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.807205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.807210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.807214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.807222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.807257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.807633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 10:04:25.807647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.807652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.807657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.807669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.807674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.807678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.807685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.807706] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.808079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 10:04:25.808095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.808099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.808104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.808116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.808121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.808125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.808132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.808153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.808491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 10:04:25.808506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.808510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.808515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.808526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.808531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.808535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.808558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.808577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.811702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 10:04:25.811722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.811743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.811747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.811770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.811792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.811797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab4750) 00:14:53.839 [2024-11-04 10:04:25.811806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:53.839 [2024-11-04 10:04:25.811831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b18bc0, cid 3, qid 0 00:14:53.839 [2024-11-04 10:04:25.812022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:53.839 [2024-11-04 
10:04:25.812038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:53.839 [2024-11-04 10:04:25.812042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:53.839 [2024-11-04 10:04:25.812047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b18bc0) on tqpair=0x1ab4750 00:14:53.839 [2024-11-04 10:04:25.812056] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:14:53.839 0% 00:14:53.839 Data Units Read: 0 00:14:53.839 Data Units Written: 0 00:14:53.839 Host Read Commands: 0 00:14:53.839 Host Write Commands: 0 00:14:53.839 Controller Busy Time: 0 minutes 00:14:53.839 Power Cycles: 0 00:14:53.839 Power On Hours: 0 hours 00:14:53.839 Unsafe Shutdowns: 0 00:14:53.839 Unrecoverable Media Errors: 0 00:14:53.839 Lifetime Error Log Entries: 0 00:14:53.839 Warning Temperature Time: 0 minutes 00:14:53.839 Critical Temperature Time: 0 minutes 00:14:53.839 00:14:53.839 Number of Queues 00:14:53.839 ================ 00:14:53.839 Number of I/O Submission Queues: 127 00:14:53.839 Number of I/O Completion Queues: 127 00:14:53.839 00:14:53.839 Active Namespaces 00:14:53.839 ================= 00:14:53.839 Namespace ID:1 00:14:53.839 Error Recovery Timeout: Unlimited 00:14:53.839 Command Set Identifier: NVM (00h) 00:14:53.839 Deallocate: Supported 00:14:53.839 Deallocated/Unwritten Error: Not Supported 00:14:53.839 Deallocated Read Value: Unknown 00:14:53.839 Deallocate in Write Zeroes: Not Supported 00:14:53.839 Deallocated Guard Field: 0xFFFF 00:14:53.839 Flush: Supported 00:14:53.839 Reservation: Supported 00:14:53.839 Namespace Sharing Capabilities: Multiple Controllers 00:14:53.839 Size (in LBAs): 131072 (0GiB) 00:14:53.839 Capacity (in LBAs): 131072 (0GiB) 00:14:53.839 Utilization (in LBAs): 131072 (0GiB) 00:14:53.839 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:53.839 EUI64: ABCDEF0123456789 00:14:53.839 UUID: 1d8c9a8b-406a-434f-b836-07430bb49eae 00:14:53.839 Thin Provisioning: Not Supported 00:14:53.839 Per-NS Atomic Units: Yes 00:14:53.839 Atomic Boundary Size (Normal): 0 00:14:53.839 Atomic Boundary Size (PFail): 0 00:14:53.839 Atomic Boundary Offset: 0 00:14:53.840 Maximum Single Source Range Length: 65535 00:14:53.840 Maximum Copy Length: 65535 00:14:53.840 Maximum Source Range Count: 1 00:14:53.840 NGUID/EUI64 Never Reused: No 00:14:53.840 Namespace Write Protected: No 00:14:53.840 Number of LBA Formats: 1 00:14:53.840 Current LBA Format: LBA Format #00 00:14:53.840 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:53.840 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:53.840 10:04:25 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:53.840 rmmod nvme_tcp 00:14:53.840 rmmod nvme_fabrics 00:14:53.840 rmmod nvme_keyring 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73851 ']' 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73851 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 73851 ']' 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 73851 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73851 00:14:53.840 killing process with pid 73851 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73851' 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 73851 00:14:53.840 10:04:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 73851 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:54.098 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.360 10:04:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:54.360 00:14:54.360 real 0m2.386s 00:14:54.360 user 0m4.815s 00:14:54.360 sys 0m0.772s 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.360 ************************************ 00:14:54.360 END TEST nvmf_identify 00:14:54.360 ************************************ 00:14:54.360 10:04:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:54.619 ************************************ 00:14:54.619 START TEST nvmf_perf 00:14:54.619 ************************************ 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:54.619 * Looking for test storage... 
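
The nvmftestfini teardown traced just above unloads the initiator kernel modules, strips the SPDK_NVMF-tagged iptables rules, and deletes the veth/bridge topology before the next test starts. Below is a minimal standalone sketch of that same sequence, not the harness itself: the interface, bridge, and namespace names (nvmf_init_*, nvmf_tgt_*, nvmf_br, nvmf_tgt_ns_spdk) are the ones this run created earlier, and the final namespace deletion is an assumption about what remove_spdk_ns ends up doing.

#!/usr/bin/env bash
# Sketch of the nvmftestfini network teardown traced above (not the harness code).
set -x

# Unload the kernel initiator modules pulled in for the test; removing
# nvme-tcp also drops nvme_fabrics/nvme_keyring, as the rmmod output shows.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Keep every iptables rule except the SPDK_NVMF-tagged ones added during setup
# (this mirrors the iptr helper traced above).
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the veth/bridge endpoints, bring them down, and delete the topology.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" nomaster
  ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# Assumption: remove_spdk_ns ultimately deletes the namespace itself.
ip netns delete nvmf_tgt_ns_spdk
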
00:14:54.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:54.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.619 --rc genhtml_branch_coverage=1 00:14:54.619 --rc genhtml_function_coverage=1 00:14:54.619 --rc genhtml_legend=1 00:14:54.619 --rc geninfo_all_blocks=1 00:14:54.619 --rc geninfo_unexecuted_blocks=1 00:14:54.619 00:14:54.619 ' 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:54.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.619 --rc genhtml_branch_coverage=1 00:14:54.619 --rc genhtml_function_coverage=1 00:14:54.619 --rc genhtml_legend=1 00:14:54.619 --rc geninfo_all_blocks=1 00:14:54.619 --rc geninfo_unexecuted_blocks=1 00:14:54.619 00:14:54.619 ' 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:54.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.619 --rc genhtml_branch_coverage=1 00:14:54.619 --rc genhtml_function_coverage=1 00:14:54.619 --rc genhtml_legend=1 00:14:54.619 --rc geninfo_all_blocks=1 00:14:54.619 --rc geninfo_unexecuted_blocks=1 00:14:54.619 00:14:54.619 ' 00:14:54.619 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:54.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.620 --rc genhtml_branch_coverage=1 00:14:54.620 --rc genhtml_function_coverage=1 00:14:54.620 --rc genhtml_legend=1 00:14:54.620 --rc geninfo_all_blocks=1 00:14:54.620 --rc geninfo_unexecuted_blocks=1 00:14:54.620 00:14:54.620 ' 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:54.620 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:54.621 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:54.879 Cannot find device "nvmf_init_br" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:54.879 Cannot find device "nvmf_init_br2" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:54.879 Cannot find device "nvmf_tgt_br" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.879 Cannot find device "nvmf_tgt_br2" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:54.879 Cannot find device "nvmf_init_br" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:54.879 Cannot find device "nvmf_init_br2" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:54.879 Cannot find device "nvmf_tgt_br" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:54.879 Cannot find device "nvmf_tgt_br2" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:54.879 Cannot find device "nvmf_br" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:54.879 Cannot find device "nvmf_init_if" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:54.879 Cannot find device "nvmf_init_if2" 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.879 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.880 10:04:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.880 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.880 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.880 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:54.880 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:55.139 10:04:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:55.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:55.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:14:55.139 00:14:55.139 --- 10.0.0.3 ping statistics --- 00:14:55.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.139 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:55.139 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:55.139 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:55.139 00:14:55.139 --- 10.0.0.4 ping statistics --- 00:14:55.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.139 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:55.139 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:55.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:55.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:14:55.139 00:14:55.139 --- 10.0.0.1 ping statistics --- 00:14:55.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.139 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:55.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:14:55.140 00:14:55.140 --- 10.0.0.2 ping statistics --- 00:14:55.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.140 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74106 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74106 00:14:55.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 74106 ']' 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:55.140 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:55.140 [2024-11-04 10:04:27.307569] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:14:55.140 [2024-11-04 10:04:27.307934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.399 [2024-11-04 10:04:27.464183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.399 [2024-11-04 10:04:27.526361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.399 [2024-11-04 10:04:27.526709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.399 [2024-11-04 10:04:27.526878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.399 [2024-11-04 10:04:27.527021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.399 [2024-11-04 10:04:27.527235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.399 [2024-11-04 10:04:27.528620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.399 [2024-11-04 10:04:27.528897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.399 [2024-11-04 10:04:27.528759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.399 [2024-11-04 10:04:27.528896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.659 [2024-11-04 10:04:27.585642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:55.659 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:55.659 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:14:55.659 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:55.659 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:55.659 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:55.659 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.659 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:55.659 10:04:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:56.228 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:56.228 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:56.487 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:56.487 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:56.745 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:56.745 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:56.745 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:56.745 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:56.745 10:04:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:57.003 [2024-11-04 10:04:29.048089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.003 10:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:57.261 10:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:57.261 10:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:57.519 10:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:57.519 10:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:57.777 10:04:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:58.035 [2024-11-04 10:04:30.065515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.035 10:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:58.299 10:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:58.299 10:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:58.299 10:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:58.299 10:04:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:59.694 Initializing NVMe Controllers 00:14:59.694 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:59.694 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:59.694 Initialization complete. Launching workers. 00:14:59.694 ======================================================== 00:14:59.694 Latency(us) 00:14:59.694 Device Information : IOPS MiB/s Average min max 00:14:59.694 PCIE (0000:00:10.0) NSID 1 from core 0: 23742.41 92.74 1347.65 368.34 7891.77 00:14:59.694 ======================================================== 00:14:59.694 Total : 23742.41 92.74 1347.65 368.34 7891.77 00:14:59.694 00:14:59.694 10:04:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:01.071 Initializing NVMe Controllers 00:15:01.071 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.071 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:01.071 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:01.071 Initialization complete. Launching workers. 
00:15:01.071 ======================================================== 00:15:01.071 Latency(us) 00:15:01.071 Device Information : IOPS MiB/s Average min max 00:15:01.071 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3681.12 14.38 270.24 99.05 7163.37 00:15:01.071 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.87 0.49 8071.47 4984.16 12010.69 00:15:01.071 ======================================================== 00:15:01.071 Total : 3805.99 14.87 526.19 99.05 12010.69 00:15:01.071 00:15:01.071 10:04:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:02.461 Initializing NVMe Controllers 00:15:02.461 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.461 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:02.461 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:02.461 Initialization complete. Launching workers. 00:15:02.461 ======================================================== 00:15:02.461 Latency(us) 00:15:02.461 Device Information : IOPS MiB/s Average min max 00:15:02.461 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8660.97 33.83 3695.09 650.20 10740.16 00:15:02.461 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3871.99 15.12 8311.23 6754.48 17288.95 00:15:02.461 ======================================================== 00:15:02.461 Total : 12532.96 48.96 5121.22 650.20 17288.95 00:15:02.461 00:15:02.461 10:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:02.461 10:04:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:04.991 Initializing NVMe Controllers 00:15:04.991 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.991 Controller IO queue size 128, less than required. 00:15:04.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.991 Controller IO queue size 128, less than required. 00:15:04.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.991 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:04.991 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:04.991 Initialization complete. Launching workers. 
00:15:04.991 ======================================================== 00:15:04.991 Latency(us) 00:15:04.991 Device Information : IOPS MiB/s Average min max 00:15:04.991 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1747.79 436.95 74450.34 34854.78 112270.88 00:15:04.991 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 664.00 166.00 205896.15 64337.37 365947.67 00:15:04.991 ======================================================== 00:15:04.991 Total : 2411.79 602.95 110639.28 34854.78 365947.67 00:15:04.991 00:15:04.991 10:04:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:04.991 Initializing NVMe Controllers 00:15:04.991 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.991 Controller IO queue size 128, less than required. 00:15:04.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.991 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:04.991 Controller IO queue size 128, less than required. 00:15:04.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.991 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:04.991 WARNING: Some requested NVMe devices were skipped 00:15:04.991 No valid NVMe controllers or AIO or URING devices found 00:15:05.249 10:04:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:07.812 Initializing NVMe Controllers 00:15:07.812 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:07.812 Controller IO queue size 128, less than required. 00:15:07.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.812 Controller IO queue size 128, less than required. 00:15:07.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.812 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:07.812 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:07.812 Initialization complete. Launching workers. 
00:15:07.812 00:15:07.812 ==================== 00:15:07.812 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:07.812 TCP transport: 00:15:07.812 polls: 10489 00:15:07.812 idle_polls: 7490 00:15:07.812 sock_completions: 2999 00:15:07.812 nvme_completions: 5717 00:15:07.812 submitted_requests: 8690 00:15:07.812 queued_requests: 1 00:15:07.812 00:15:07.812 ==================== 00:15:07.812 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:07.812 TCP transport: 00:15:07.812 polls: 10627 00:15:07.812 idle_polls: 6449 00:15:07.812 sock_completions: 4178 00:15:07.812 nvme_completions: 6003 00:15:07.812 submitted_requests: 8958 00:15:07.813 queued_requests: 1 00:15:07.813 ======================================================== 00:15:07.813 Latency(us) 00:15:07.813 Device Information : IOPS MiB/s Average min max 00:15:07.813 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1427.43 356.86 91044.73 44382.79 152962.09 00:15:07.813 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1498.86 374.71 85769.80 31906.69 138301.64 00:15:07.813 ======================================================== 00:15:07.813 Total : 2926.29 731.57 88342.89 31906.69 152962.09 00:15:07.813 00:15:07.813 10:04:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:07.813 10:04:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:08.092 rmmod nvme_tcp 00:15:08.092 rmmod nvme_fabrics 00:15:08.092 rmmod nvme_keyring 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74106 ']' 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74106 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 74106 ']' 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 74106 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74106 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:08.092 killing process with pid 74106 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74106' 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 74106 00:15:08.092 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 74106 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:09.029 10:04:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:09.029 00:15:09.029 real 0m14.568s 00:15:09.029 user 0m52.298s 00:15:09.029 sys 0m4.254s 00:15:09.029 
************************************ 00:15:09.029 END TEST nvmf_perf 00:15:09.029 ************************************ 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:09.029 ************************************ 00:15:09.029 START TEST nvmf_fio_host 00:15:09.029 ************************************ 00:15:09.029 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:09.289 * Looking for test storage... 00:15:09.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:09.289 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:09.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.290 --rc genhtml_branch_coverage=1 00:15:09.290 --rc genhtml_function_coverage=1 00:15:09.290 --rc genhtml_legend=1 00:15:09.290 --rc geninfo_all_blocks=1 00:15:09.290 --rc geninfo_unexecuted_blocks=1 00:15:09.290 00:15:09.290 ' 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:09.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.290 --rc genhtml_branch_coverage=1 00:15:09.290 --rc genhtml_function_coverage=1 00:15:09.290 --rc genhtml_legend=1 00:15:09.290 --rc geninfo_all_blocks=1 00:15:09.290 --rc geninfo_unexecuted_blocks=1 00:15:09.290 00:15:09.290 ' 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:09.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.290 --rc genhtml_branch_coverage=1 00:15:09.290 --rc genhtml_function_coverage=1 00:15:09.290 --rc genhtml_legend=1 00:15:09.290 --rc geninfo_all_blocks=1 00:15:09.290 --rc geninfo_unexecuted_blocks=1 00:15:09.290 00:15:09.290 ' 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:09.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.290 --rc genhtml_branch_coverage=1 00:15:09.290 --rc genhtml_function_coverage=1 00:15:09.290 --rc genhtml_legend=1 00:15:09.290 --rc geninfo_all_blocks=1 00:15:09.290 --rc geninfo_unexecuted_blocks=1 00:15:09.290 00:15:09.290 ' 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.290 10:04:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.290 10:04:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.290 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:09.291 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
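The nvmftestinit entries recorded below build the virtual network used by the fio host test: a dedicated namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends, a bridge (nvmf_br) joining both sides, addresses in 10.0.0.1-10.0.0.4, and iptables ACCEPT rules for TCP port 4420. For orientation only, the sequence condenses roughly to the shell below; interface and address names are taken verbatim from the log, the second initiator/target pair is set up the same way and omitted here, and this is a sketch of what the log records, not the test script itself.

  # create the target network namespace and one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address the initiator (host) side and the target (namespace) side, bring them up
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # bridge the two sides together and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings to 10.0.0.1-10.0.0.4 that follow in the log simply verify this topology before the target is started inside the namespace.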
00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:09.291 Cannot find device "nvmf_init_br" 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:09.291 Cannot find device "nvmf_init_br2" 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:09.291 Cannot find device "nvmf_tgt_br" 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:09.291 Cannot find device "nvmf_tgt_br2" 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:09.291 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:09.551 Cannot find device "nvmf_init_br" 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:09.551 Cannot find device "nvmf_init_br2" 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:09.551 Cannot find device "nvmf_tgt_br" 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:09.551 Cannot find device "nvmf_tgt_br2" 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:09.551 Cannot find device "nvmf_br" 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:09.551 Cannot find device "nvmf_init_if" 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:09.551 Cannot find device "nvmf_init_if2" 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:09.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:09.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:09.551 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:09.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:09.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:15:09.811 00:15:09.811 --- 10.0.0.3 ping statistics --- 00:15:09.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.811 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:09.811 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:09.811 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:15:09.811 00:15:09.811 --- 10.0.0.4 ping statistics --- 00:15:09.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.811 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:09.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:15:09.811 00:15:09.811 --- 10.0.0.1 ping statistics --- 00:15:09.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.811 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:09.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:09.811 00:15:09.811 --- 10.0.0.2 ping statistics --- 00:15:09.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.811 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74564 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74564 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 74564 ']' 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:09.811 10:04:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:09.811 [2024-11-04 10:04:41.857300] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:15:09.811 [2024-11-04 10:04:41.857394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.070 [2024-11-04 10:04:42.009489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.070 [2024-11-04 10:04:42.080210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.070 [2024-11-04 10:04:42.080540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.070 [2024-11-04 10:04:42.080797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.070 [2024-11-04 10:04:42.081011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.070 [2024-11-04 10:04:42.081145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
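Once the nvmf_tgt application is up inside the namespace, the test assembles the NVMe/TCP subsystem entirely over JSON-RPC, as the rpc.py calls recorded in the following entries show. Condensed for orientation (rpc.py abbreviates scripts/rpc.py; bdev names and NQNs appear exactly as in the log; this is a sketch of the recorded sequence, not the test script itself):

  # TCP transport with the options used by host/fio.sh
  rpc.py nvmf_create_transport -t tcp -o -u 8192

  # 64 MiB RAM-backed bdev with 512-byte blocks to serve as the namespace
  rpc.py bdev_malloc_create 64 512 -b Malloc1

  # subsystem that accepts any host, with the Malloc1 namespace attached
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1

  # data and discovery listeners on the target-side address
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

fio is then pointed at that listener through the SPDK fio plugin (build/fio/spdk_nvme loaded via LD_PRELOAD, ioengine=spdk, --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'), which is what produces the bandwidth and latency tables further down.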
00:15:10.070 [2024-11-04 10:04:42.082556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.070 [2024-11-04 10:04:42.082631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.070 [2024-11-04 10:04:42.082787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.070 [2024-11-04 10:04:42.082795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.070 [2024-11-04 10:04:42.166532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.006 10:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:11.006 10:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:15:11.006 10:04:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:11.006 [2024-11-04 10:04:43.128819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.006 10:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:11.006 10:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:11.006 10:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.265 10:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:11.523 Malloc1 00:15:11.523 10:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:11.780 10:04:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.038 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:12.340 [2024-11-04 10:04:44.356455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:12.340 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:15:12.598 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:12.599 10:04:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:12.872 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:12.872 fio-3.35 00:15:12.872 Starting 1 thread 00:15:15.403 00:15:15.403 test: (groupid=0, jobs=1): err= 0: pid=74647: Mon Nov 4 10:04:47 2024 00:15:15.403 read: IOPS=8029, BW=31.4MiB/s (32.9MB/s)(63.0MiB/2007msec) 00:15:15.403 slat (usec): min=2, max=261, avg= 2.56, stdev= 2.71 00:15:15.403 clat (usec): min=2063, max=15037, avg=8300.62, stdev=638.09 00:15:15.403 lat (usec): min=2106, max=15039, avg=8303.18, stdev=637.81 00:15:15.403 clat percentiles (usec): 00:15:15.403 | 1.00th=[ 7111], 5.00th=[ 7439], 10.00th=[ 7635], 20.00th=[ 7898], 00:15:15.403 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:15:15.403 | 70.00th=[ 8586], 80.00th=[ 8717], 90.00th=[ 8979], 95.00th=[ 9110], 00:15:15.403 | 99.00th=[10290], 99.50th=[10945], 99.90th=[13173], 99.95th=[13960], 00:15:15.403 | 99.99th=[15008] 00:15:15.403 bw ( KiB/s): min=30992, max=32768, per=99.96%, avg=32108.00, stdev=774.71, samples=4 00:15:15.404 iops : min= 7748, max= 8192, avg=8027.00, stdev=193.68, samples=4 00:15:15.404 write: IOPS=8002, BW=31.3MiB/s (32.8MB/s)(62.7MiB/2007msec); 0 zone resets 00:15:15.404 slat (usec): min=2, max=205, avg= 2.72, stdev= 1.90 00:15:15.404 clat (usec): min=1928, max=14678, avg=7564.50, stdev=603.53 00:15:15.404 lat (usec): min=1941, max=14681, avg=7567.21, stdev=603.39 00:15:15.404 clat percentiles (usec): 
00:15:15.404 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 7177], 00:15:15.404 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7635], 00:15:15.404 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8160], 95.00th=[ 8356], 00:15:15.404 | 99.00th=[ 9634], 99.50th=[10290], 99.90th=[12125], 99.95th=[13304], 00:15:15.404 | 99.99th=[14615] 00:15:15.404 bw ( KiB/s): min=31632, max=32288, per=99.94%, avg=31994.00, stdev=308.00, samples=4 00:15:15.404 iops : min= 7908, max= 8072, avg=7998.50, stdev=77.00, samples=4 00:15:15.404 lat (msec) : 2=0.01%, 4=0.16%, 10=98.85%, 20=0.99% 00:15:15.404 cpu : usr=71.39%, sys=21.88%, ctx=9, majf=0, minf=7 00:15:15.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:15.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.404 issued rwts: total=16116,16062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.404 00:15:15.404 Run status group 0 (all jobs): 00:15:15.404 READ: bw=31.4MiB/s (32.9MB/s), 31.4MiB/s-31.4MiB/s (32.9MB/s-32.9MB/s), io=63.0MiB (66.0MB), run=2007-2007msec 00:15:15.404 WRITE: bw=31.3MiB/s (32.8MB/s), 31.3MiB/s-31.3MiB/s (32.8MB/s-32.8MB/s), io=62.7MiB (65.8MB), run=2007-2007msec 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep 
libclang_rt.asan 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:15.404 10:04:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:15:15.404 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:15.404 fio-3.35 00:15:15.404 Starting 1 thread 00:15:17.938 00:15:17.938 test: (groupid=0, jobs=1): err= 0: pid=74690: Mon Nov 4 10:04:49 2024 00:15:17.938 read: IOPS=7650, BW=120MiB/s (125MB/s)(240MiB/2008msec) 00:15:17.938 slat (usec): min=3, max=116, avg= 3.73, stdev= 1.73 00:15:17.938 clat (usec): min=2261, max=20937, avg=9306.19, stdev=2540.91 00:15:17.938 lat (usec): min=2264, max=20941, avg=9309.92, stdev=2540.94 00:15:17.938 clat percentiles (usec): 00:15:17.938 | 1.00th=[ 4359], 5.00th=[ 5342], 10.00th=[ 6063], 20.00th=[ 7111], 00:15:17.938 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[ 9896], 00:15:17.938 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12780], 95.00th=[13829], 00:15:17.938 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16909], 99.95th=[17433], 00:15:17.938 | 99.99th=[19268] 00:15:17.938 bw ( KiB/s): min=58656, max=68768, per=51.63%, avg=63200.00, stdev=5201.81, samples=4 00:15:17.938 iops : min= 3666, max= 4298, avg=3950.00, stdev=325.11, samples=4 00:15:17.938 write: IOPS=4598, BW=71.9MiB/s (75.3MB/s)(129MiB/1798msec); 0 zone resets 00:15:17.938 slat (usec): min=35, max=358, avg=38.85, stdev= 7.99 00:15:17.938 clat (usec): min=5882, max=24603, avg=12918.49, stdev=2360.64 00:15:17.938 lat (usec): min=5919, max=24640, avg=12957.34, stdev=2360.84 00:15:17.938 clat percentiles (usec): 00:15:17.938 | 1.00th=[ 8291], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10814], 00:15:17.938 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12780], 60.00th=[13304], 00:15:17.938 | 70.00th=[14222], 80.00th=[15008], 90.00th=[16057], 95.00th=[16909], 00:15:17.938 | 99.00th=[18482], 99.50th=[19006], 99.90th=[20317], 99.95th=[20317], 00:15:17.938 | 99.99th=[24511] 00:15:17.938 bw ( KiB/s): min=61280, max=70848, per=89.50%, avg=65856.00, stdev=4786.83, samples=4 00:15:17.938 iops : min= 3830, max= 4428, avg=4116.00, stdev=299.18, samples=4 00:15:17.938 lat (msec) : 4=0.36%, 10=42.92%, 20=56.66%, 50=0.07% 00:15:17.938 cpu : usr=83.16%, sys=13.05%, ctx=53, majf=0, minf=14 00:15:17.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:17.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.938 issued rwts: total=15363,8269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.938 00:15:17.938 Run status group 0 (all jobs): 00:15:17.938 READ: bw=120MiB/s (125MB/s), 
120MiB/s-120MiB/s (125MB/s-125MB/s), io=240MiB (252MB), run=2008-2008msec 00:15:17.938 WRITE: bw=71.9MiB/s (75.3MB/s), 71.9MiB/s-71.9MiB/s (75.3MB/s-75.3MB/s), io=129MiB (135MB), run=1798-1798msec 00:15:17.938 10:04:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:17.938 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:17.938 rmmod nvme_tcp 00:15:18.196 rmmod nvme_fabrics 00:15:18.196 rmmod nvme_keyring 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74564 ']' 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74564 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 74564 ']' 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 74564 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74564 00:15:18.196 killing process with pid 74564 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74564' 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 74564 00:15:18.196 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 74564 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.454 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:15:18.713 00:15:18.713 real 0m9.476s 00:15:18.713 user 0m37.826s 00:15:18.713 sys 0m2.501s 00:15:18.713 ************************************ 00:15:18.713 END TEST nvmf_fio_host 00:15:18.713 ************************************ 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.713 ************************************ 00:15:18.713 START TEST nvmf_failover 00:15:18.713 
************************************ 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:18.713 * Looking for test storage... 00:15:18.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.713 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:18.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.714 --rc genhtml_branch_coverage=1 00:15:18.714 --rc genhtml_function_coverage=1 00:15:18.714 --rc genhtml_legend=1 00:15:18.714 --rc geninfo_all_blocks=1 00:15:18.714 --rc geninfo_unexecuted_blocks=1 00:15:18.714 00:15:18.714 ' 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:18.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.714 --rc genhtml_branch_coverage=1 00:15:18.714 --rc genhtml_function_coverage=1 00:15:18.714 --rc genhtml_legend=1 00:15:18.714 --rc geninfo_all_blocks=1 00:15:18.714 --rc geninfo_unexecuted_blocks=1 00:15:18.714 00:15:18.714 ' 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:18.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.714 --rc genhtml_branch_coverage=1 00:15:18.714 --rc genhtml_function_coverage=1 00:15:18.714 --rc genhtml_legend=1 00:15:18.714 --rc geninfo_all_blocks=1 00:15:18.714 --rc geninfo_unexecuted_blocks=1 00:15:18.714 00:15:18.714 ' 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:18.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.714 --rc genhtml_branch_coverage=1 00:15:18.714 --rc genhtml_function_coverage=1 00:15:18.714 --rc genhtml_legend=1 00:15:18.714 --rc geninfo_all_blocks=1 00:15:18.714 --rc geninfo_unexecuted_blocks=1 00:15:18.714 00:15:18.714 ' 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.714 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.973 
10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:18.973 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:18.973 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:18.974 Cannot find device "nvmf_init_br" 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:18.974 Cannot find device "nvmf_init_br2" 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:15:18.974 Cannot find device "nvmf_tgt_br" 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.974 Cannot find device "nvmf_tgt_br2" 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:18.974 Cannot find device "nvmf_init_br" 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:18.974 Cannot find device "nvmf_init_br2" 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:18.974 Cannot find device "nvmf_tgt_br" 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:15:18.974 10:04:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:18.974 Cannot find device "nvmf_tgt_br2" 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:18.974 Cannot find device "nvmf_br" 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:18.974 Cannot find device "nvmf_init_if" 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:18.974 Cannot find device "nvmf_init_if2" 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.974 
10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:18.974 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:19.233 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:19.233 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:19.233 00:15:19.233 --- 10.0.0.3 ping statistics --- 00:15:19.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.233 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:19.233 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:19.233 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:15:19.233 00:15:19.233 --- 10.0.0.4 ping statistics --- 00:15:19.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.233 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:19.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:19.233 00:15:19.233 --- 10.0.0.1 ping statistics --- 00:15:19.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.233 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:19.233 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:19.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:15:19.233 00:15:19.233 --- 10.0.0.2 ping statistics --- 00:15:19.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.233 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74960 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74960 00:15:19.234 10:04:51 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 74960 ']' 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:19.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:19.234 10:04:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.234 [2024-11-04 10:04:51.364157] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:15:19.234 [2024-11-04 10:04:51.364261] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.492 [2024-11-04 10:04:51.518105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:19.492 [2024-11-04 10:04:51.586521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.492 [2024-11-04 10:04:51.586599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.492 [2024-11-04 10:04:51.586618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.492 [2024-11-04 10:04:51.586629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.492 [2024-11-04 10:04:51.586639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:19.492 [2024-11-04 10:04:51.587927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.492 [2024-11-04 10:04:51.588033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.492 [2024-11-04 10:04:51.588039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.492 [2024-11-04 10:04:51.645070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:20.425 10:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:20.425 10:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:20.425 10:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:20.425 10:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:20.425 10:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:20.425 10:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.425 10:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:20.682 [2024-11-04 10:04:52.732856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.682 10:04:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:20.942 Malloc0 00:15:20.942 10:04:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:21.200 10:04:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:21.791 10:04:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:22.049 [2024-11-04 10:04:53.987615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:22.049 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:22.307 [2024-11-04 10:04:54.247826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:22.307 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:22.566 [2024-11-04 10:04:54.512057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75019 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75019 /var/tmp/bdevperf.sock 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75019 ']' 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.566 10:04:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:23.500 10:04:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.500 10:04:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:23.500 10:04:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:24.066 NVMe0n1 00:15:24.066 10:04:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:24.324 00:15:24.324 10:04:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:24.324 10:04:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75048 00:15:24.324 10:04:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:25.258 10:04:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:25.517 [2024-11-04 10:04:57.560764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.517 [2024-11-04 10:04:57.560839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.517 [2024-11-04 10:04:57.560851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.517 [2024-11-04 10:04:57.560861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.517 [2024-11-04 10:04:57.560870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.517 [2024-11-04 10:04:57.560879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.517 [2024-11-04 10:04:57.560888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.517 [2024-11-04 10:04:57.560897] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.518 [ ... the same nvmf_tcp_qpair_set_recv_state *ERROR* line repeats for every recv-state transition timestamped 10:04:57.560905 through 10:04:57.563941; the duplicate entries are omitted here ... ] 00:15:25.519 [2024-11-04 10:04:57.563950]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.519 [2024-11-04 10:04:57.563959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.519 [2024-11-04 10:04:57.563967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.519 [2024-11-04 10:04:57.563976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.519 [2024-11-04 10:04:57.563985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.519 [2024-11-04 10:04:57.563993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.519 [2024-11-04 10:04:57.564001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf8cf0 is same with the state(6) to be set 00:15:25.519 10:04:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:28.801 10:05:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:28.801 00:15:29.058 10:05:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:29.317 10:05:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:32.598 10:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:32.598 [2024-11-04 10:05:04.658048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.598 10:05:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:33.534 10:05:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:34.101 10:05:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75048 00:15:39.443 { 00:15:39.443 "results": [ 00:15:39.443 { 00:15:39.443 "job": "NVMe0n1", 00:15:39.443 "core_mask": "0x1", 00:15:39.443 "workload": "verify", 00:15:39.443 "status": "finished", 00:15:39.443 "verify_range": { 00:15:39.443 "start": 0, 00:15:39.443 "length": 16384 00:15:39.443 }, 00:15:39.443 "queue_depth": 128, 00:15:39.443 "io_size": 4096, 00:15:39.443 "runtime": 15.009257, 00:15:39.443 "iops": 8511.014236081106, 00:15:39.443 "mibps": 33.24614935969182, 00:15:39.443 "io_failed": 3157, 00:15:39.443 "io_timeout": 0, 00:15:39.443 "avg_latency_us": 14642.740935182801, 00:15:39.443 "min_latency_us": 636.7418181818182, 00:15:39.443 "max_latency_us": 19303.33090909091 00:15:39.444 } 00:15:39.444 ], 00:15:39.444 "core_count": 1 00:15:39.444 } 00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75019 00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75019 ']' 00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
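A note on the block just above: the { "results": [ ... ] } object is the end-of-run summary for the NVMe0n1 bdevperf job (queue depth 128, 4096-byte verify I/O, about 15 s runtime, io_failed 3157). If that JSON is captured to a file (results.json is an assumed name here; the harness only echoes the blob), the headline numbers can be pulled out with jq, for example:

# Sketch: extract the headline figures from the bdevperf summary shown above.
# Assumes the JSON block was saved verbatim to results.json.
jq -r '.results[0] | "job=\(.job) iops=\(.iops) mibps=\(.mibps) io_failed=\(.io_failed) avg_latency_us=\(.avg_latency_us)"' results.json
# For this run that should print, modulo number formatting:
# job=NVMe0n1 iops=8511.014236081106 mibps=33.24614935969182 io_failed=3157 avg_latency_us=14642.740935182801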
00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75019
00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75019
00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:39.444 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
killing process with pid 75019
10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75019'
10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75019
10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75019
00:15:39.709 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:39.709 [2024-11-04 10:04:54.590674] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization...
00:15:39.709 [2024-11-04 10:04:54.590802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75019 ]
00:15:39.709 [2024-11-04 10:04:54.739342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:39.709 [2024-11-04 10:04:54.803851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:39.709 [2024-11-04 10:04:54.856821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:15:39.709 Running I/O for 15 seconds...
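Everything below is the bdevperf-side log (try.txt) reacting to the target-side listener changes traced above. For reference, the path manipulation that failover.sh performs amounts to RPC calls of the following shape; the addresses, ports, NQN and bdevperf RPC socket are copied from the trace, while the closing bdev_nvme_get_controllers call is only an illustrative way to confirm the controller came back and is not part of the recorded run:

# Sketch of the failover driving sequence, assuming a target at 10.0.0.3 and a
# bdevperf instance listening on /var/tmp/bdevperf.sock (both taken from the log).

# Register a second TCP path to the same subsystem and mark it as a failover path.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -x failover

# Drop the listener the host is currently using; in-flight I/O gets aborted and
# bdev_nvme fails over to the remaining path.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

# Bring the original port back, then retire the temporary one.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

# Illustrative check (not in the trace): list the controllers bdevperf knows about
# and confirm the NVMe0 controller is connected again after the failover completes.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers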
00:15:39.709 6827.00 IOPS, 26.67 MiB/s [2024-11-04T10:05:11.879Z]
[2024-11-04 10:04:57.564071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 10:04:57.564120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs follow for the rest of the in-flight queue on sqid:1 (READs for lba 63960-64768 and 64776-64840, WRITEs for lba 64856-64968), every one completed with ABORTED - SQ DELETION (00/08) ...]
[2024-11-04 10:04:57.568179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b76120 is same with the state(6) to be set
[2024-11-04 10:04:57.568197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[2024-11-04 10:04:57.568209] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-11-04 10:04:57.568220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64848 len:8 PRP1 0x0 PRP2 0x0
[2024-11-04 10:04:57.568233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-04 10:04:57.568302] bdev_nvme.c:2049:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[2024-11-04 10:04:57.568362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-04 10:04:57.568383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-04 10:04:57.568399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-04 10:04:57.568413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-04 10:04:57.568428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-04 10:04:57.568441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-04 10:04:57.568456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-04 10:04:57.568469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-04 10:04:57.568483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-11-04 10:04:57.572381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
[2024-11-04 10:04:57.572421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adb710 (9): Bad file descriptor
[2024-11-04 10:04:57.607621] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:15:39.713 7152.50 IOPS, 27.94 MiB/s [2024-11-04T10:05:11.883Z] 7797.67 IOPS, 30.46 MiB/s [2024-11-04T10:05:11.883Z] 8134.25 IOPS, 31.77 MiB/s [2024-11-04T10:05:11.883Z]
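The block above is one complete failover event as seen from the bdevperf host: every command still in flight on the I/O and admin queues is completed as ABORTED - SQ DELETION, bdev_nvme starts a failover from 10.0.0.3:4420 to 10.0.0.3:4421, the controller is reset and reconnects, and throughput recovers (7152.50 up to 8134.25 IOPS). The same pattern repeats below for the next path change. When skimming a long try.txt like this one, a couple of grep one-liners are enough to summarize it; the path is the one printed by the cat command earlier, and this is a convenience sketch rather than part of the recorded run:

# How many commands were completed as aborted over the whole run.
grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

# The failover and reset-complete events, with their timestamps.
grep -E 'bdev_nvme_failover_trid|_bdev_nvme_reset_ctrlr_complete' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt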
[2024-11-04 10:05:01.224175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-04 10:05:01.224267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort pattern repeats for the I/O in flight when this path change lands: READs for lba 67368-67416 and WRITEs for lba 67888-68016 on sqid:1, each completed with ABORTED - SQ DELETION (00/08) ...]
[2024-11-04 10:05:01.225580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-04 10:05:01.225617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.713 [2024-11-04 10:05:01.225675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.713 [2024-11-04 10:05:01.225718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.713 [2024-11-04 10:05:01.225761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.713 [2024-11-04 10:05:01.225801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.713 [2024-11-04 10:05:01.225841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.713 [2024-11-04 10:05:01.225880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.713 [2024-11-04 10:05:01.225919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.713 [2024-11-04 10:05:01.225959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.713 [2024-11-04 10:05:01.225981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.226000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.226039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.226082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 
10:05:01.226612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.226960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.226985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.227037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:47 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.714 [2024-11-04 10:05:01.227931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.714 [2024-11-04 10:05:01.227958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.714 [2024-11-04 10:05:01.227989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.228041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.228093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.228143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.228194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.228246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68240 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.228312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.228364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 
[2024-11-04 10:05:01.228855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.228935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.228959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.715 [2024-11-04 10:05:01.229672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.229743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:68264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.229796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.229847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.229900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.229951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.229978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.230002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.230030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.230054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.230081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.230105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.230132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.715 [2024-11-04 10:05:01.230156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.715 [2024-11-04 10:05:01.230183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:01.230207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:01.230257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:01.230310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:01.230379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:68360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:01.230435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:01.230487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:01.230538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.230604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.230661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.230714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.230772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.230824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.230876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.230928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.230953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.230979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.231009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:01.231034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 
[2024-11-04 10:05:01.231119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.716 [2024-11-04 10:05:01.231145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.716 [2024-11-04 10:05:01.231166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67880 len:8 PRP1 0x0 PRP2 0x0 00:15:39.716 [2024-11-04 10:05:01.231189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.231276] bdev_nvme.c:2049:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:39.716 [2024-11-04 10:05:01.231398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.716 [2024-11-04 10:05:01.231436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.231463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.716 [2024-11-04 10:05:01.231488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.231513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.716 [2024-11-04 10:05:01.231536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.231562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.716 [2024-11-04 10:05:01.231600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:01.231629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:39.716 [2024-11-04 10:05:01.231708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adb710 (9): Bad file descriptor 00:15:39.716 [2024-11-04 10:05:01.236265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:39.716 [2024-11-04 10:05:01.271054] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:15:39.716 8240.80 IOPS, 32.19 MiB/s [2024-11-04T10:05:11.886Z] 8302.00 IOPS, 32.43 MiB/s [2024-11-04T10:05:11.886Z] 8381.14 IOPS, 32.74 MiB/s [2024-11-04T10:05:11.886Z] 8419.50 IOPS, 32.89 MiB/s [2024-11-04T10:05:11.886Z] 8420.89 IOPS, 32.89 MiB/s [2024-11-04T10:05:11.886Z] [2024-11-04 10:05:05.943731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:05.943854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.943900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:05.943922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.943941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:05.943956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.943972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.716 [2024-11-04 10:05:05.943986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6648 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.716 [2024-11-04 10:05:05.944378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.716 [2024-11-04 10:05:05.944394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 
[2024-11-04 10:05:05.944551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.717 [2024-11-04 10:05:05.944840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.717 [2024-11-04 10:05:05.944869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [2024-11-04 10:05:05.944886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.717 [2024-11-04 10:05:05.944908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.717 [... roughly 90 further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: each queued WRITE (sqid:1, lba 7232-7624, len:8) and READ (sqid:1, lba 6800-7120, len:8) command was completed as ABORTED - SQ DELETION (00/08) qid:1 ...] 00:15:39.719 [2024-11-04 10:05:05.947985] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.719 [2024-11-04 10:05:05.948000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.719 [2024-11-04 10:05:05.948016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.719 [2024-11-04 10:05:05.948030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.719 [2024-11-04 10:05:05.948046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.719 [2024-11-04 10:05:05.948060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.719 [2024-11-04 10:05:05.948079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.719 [2024-11-04 10:05:05.948104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.719 [2024-11-04 10:05:05.948122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.719 [2024-11-04 10:05:05.948136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.719 [2024-11-04 10:05:05.948152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.719 [2024-11-04 10:05:05.948166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.719 [2024-11-04 10:05:05.948181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b868c0 is same with the state(6) to be set 00:15:39.719 [2024-11-04 10:05:05.948203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.719 [2024-11-04 10:05:05.948223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.719 [2024-11-04 10:05:05.948241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7176 len:8 PRP1 0x0 PRP2 0x0 00:15:39.719 [2024-11-04 10:05:05.948255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.720 [2024-11-04 10:05:05.948337] bdev_nvme.c:2049:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:15:39.720 [2024-11-04 10:05:05.948413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.720 [2024-11-04 10:05:05.948436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.720 [2024-11-04 10:05:05.948453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.720 [2024-11-04 10:05:05.948479] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.720 [2024-11-04 10:05:05.948494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.720 [2024-11-04 10:05:05.948509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.720 [2024-11-04 10:05:05.948524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.720 [2024-11-04 10:05:05.948538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.720 [2024-11-04 10:05:05.948552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:15:39.720 [2024-11-04 10:05:05.948611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adb710 (9): Bad file descriptor 00:15:39.720 [2024-11-04 10:05:05.952673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:15:39.720 [2024-11-04 10:05:05.986341] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:15:39.720 8422.40 IOPS, 32.90 MiB/s [2024-11-04T10:05:11.890Z] 8457.45 IOPS, 33.04 MiB/s [2024-11-04T10:05:11.890Z] 8490.67 IOPS, 33.17 MiB/s [2024-11-04T10:05:11.890Z] 8512.00 IOPS, 33.25 MiB/s [2024-11-04T10:05:11.890Z] 8505.43 IOPS, 33.22 MiB/s [2024-11-04T10:05:11.890Z] 8510.93 IOPS, 33.25 MiB/s 00:15:39.720 Latency(us) 00:15:39.720 [2024-11-04T10:05:11.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.720 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.720 Verification LBA range: start 0x0 length 0x4000 00:15:39.720 NVMe0n1 : 15.01 8511.01 33.25 210.34 0.00 14642.74 636.74 19303.33 00:15:39.720 [2024-11-04T10:05:11.890Z] =================================================================================================================== 00:15:39.720 [2024-11-04T10:05:11.890Z] Total : 8511.01 33.25 210.34 0.00 14642.74 636.74 19303.33 00:15:39.720 Received shutdown signal, test time was about 15.000000 seconds 00:15:39.720 00:15:39.720 Latency(us) 00:15:39.720 [2024-11-04T10:05:11.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.720 [2024-11-04T10:05:11.890Z] =================================================================================================================== 00:15:39.720 [2024-11-04T10:05:11.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75222 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75222 
/var/tmp/bdevperf.sock 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75222 ']' 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:39.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:39.720 10:05:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.979 10:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:39.979 10:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:15:39.979 10:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:40.239 [2024-11-04 10:05:12.325321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:40.239 10:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:15:40.806 [2024-11-04 10:05:12.673731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:15:40.806 10:05:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:41.064 NVMe0n1 00:15:41.064 10:05:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:41.323 00:15:41.323 10:05:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:15:41.890 00:15:41.890 10:05:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:41.890 10:05:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:42.149 10:05:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:42.411 10:05:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:45.699 10:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:45.699 10:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:45.699 10:05:17 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75297 00:15:45.699 10:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.699 10:05:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75297 00:15:47.088 { 00:15:47.088 "results": [ 00:15:47.088 { 00:15:47.088 "job": "NVMe0n1", 00:15:47.088 "core_mask": "0x1", 00:15:47.088 "workload": "verify", 00:15:47.088 "status": "finished", 00:15:47.088 "verify_range": { 00:15:47.088 "start": 0, 00:15:47.088 "length": 16384 00:15:47.088 }, 00:15:47.088 "queue_depth": 128, 00:15:47.088 "io_size": 4096, 00:15:47.088 "runtime": 1.018168, 00:15:47.088 "iops": 6808.306684162142, 00:15:47.088 "mibps": 26.594947985008368, 00:15:47.088 "io_failed": 0, 00:15:47.088 "io_timeout": 0, 00:15:47.088 "avg_latency_us": 18723.770436972143, 00:15:47.088 "min_latency_us": 2353.338181818182, 00:15:47.088 "max_latency_us": 15252.014545454545 00:15:47.088 } 00:15:47.088 ], 00:15:47.088 "core_count": 1 00:15:47.088 } 00:15:47.088 10:05:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:47.088 [2024-11-04 10:05:11.725622] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:15:47.088 [2024-11-04 10:05:11.725752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75222 ] 00:15:47.088 [2024-11-04 10:05:11.872808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.088 [2024-11-04 10:05:11.939207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.088 [2024-11-04 10:05:11.992734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:47.088 [2024-11-04 10:05:14.365125] bdev_nvme.c:2049:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:47.088 [2024-11-04 10:05:14.365279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.088 [2024-11-04 10:05:14.365306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.088 [2024-11-04 10:05:14.365325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.088 [2024-11-04 10:05:14.365339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.088 [2024-11-04 10:05:14.365354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.088 [2024-11-04 10:05:14.365368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.088 [2024-11-04 10:05:14.365383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:47.088 [2024-11-04 10:05:14.365397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:47.088 [2024-11-04 10:05:14.365411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:47.088 [2024-11-04 10:05:14.365462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:47.088 [2024-11-04 10:05:14.365495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5eb710 (9): Bad file descriptor 00:15:47.088 [2024-11-04 10:05:14.371559] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:47.088 Running I/O for 1 seconds... 00:15:47.088 6804.00 IOPS, 26.58 MiB/s 00:15:47.088 Latency(us) 00:15:47.088 [2024-11-04T10:05:19.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.088 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:47.088 Verification LBA range: start 0x0 length 0x4000 00:15:47.088 NVMe0n1 : 1.02 6808.31 26.59 0.00 0.00 18723.77 2353.34 15252.01 00:15:47.088 [2024-11-04T10:05:19.258Z] =================================================================================================================== 00:15:47.088 [2024-11-04T10:05:19.258Z] Total : 6808.31 26.59 0.00 0.00 18723.77 2353.34 15252.01 00:15:47.088 10:05:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:47.088 10:05:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:47.088 10:05:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.656 10:05:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:47.656 10:05:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:47.656 10:05:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.914 10:05:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75222 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75222 ']' 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75222 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75222 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:15:51.245 killing process with pid 75222 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75222' 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75222 00:15:51.245 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75222 00:15:51.504 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:51.504 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.762 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:51.762 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:51.762 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:51.762 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.762 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:52.020 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:52.020 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:52.020 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.020 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:52.021 rmmod nvme_tcp 00:15:52.021 rmmod nvme_fabrics 00:15:52.021 rmmod nvme_keyring 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74960 ']' 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74960 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 74960 ']' 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 74960 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:52.021 10:05:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74960 00:15:52.021 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:52.021 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:52.021 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74960' 00:15:52.021 killing process with pid 74960 00:15:52.021 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 74960 00:15:52.021 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 74960 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.279 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:52.538 00:15:52.538 real 0m33.815s 00:15:52.538 user 2m10.563s 00:15:52.538 sys 0m5.811s 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:52.538 ************************************ 00:15:52.538 END TEST nvmf_failover 00:15:52.538 ************************************ 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:52.538 ************************************ 00:15:52.538 START TEST nvmf_host_discovery 00:15:52.538 ************************************ 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:52.538 * Looking for test storage... 00:15:52.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:52.538 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:52.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.798 --rc genhtml_branch_coverage=1 00:15:52.798 --rc genhtml_function_coverage=1 00:15:52.798 --rc genhtml_legend=1 00:15:52.798 --rc geninfo_all_blocks=1 00:15:52.798 --rc geninfo_unexecuted_blocks=1 00:15:52.798 00:15:52.798 ' 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:52.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.798 --rc genhtml_branch_coverage=1 00:15:52.798 --rc genhtml_function_coverage=1 00:15:52.798 --rc genhtml_legend=1 00:15:52.798 --rc geninfo_all_blocks=1 00:15:52.798 --rc geninfo_unexecuted_blocks=1 00:15:52.798 00:15:52.798 ' 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:52.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.798 --rc genhtml_branch_coverage=1 00:15:52.798 --rc genhtml_function_coverage=1 00:15:52.798 --rc genhtml_legend=1 00:15:52.798 --rc geninfo_all_blocks=1 00:15:52.798 --rc geninfo_unexecuted_blocks=1 00:15:52.798 00:15:52.798 ' 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:52.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.798 --rc genhtml_branch_coverage=1 00:15:52.798 --rc genhtml_function_coverage=1 00:15:52.798 --rc genhtml_legend=1 00:15:52.798 --rc geninfo_all_blocks=1 00:15:52.798 --rc geninfo_unexecuted_blocks=1 00:15:52.798 00:15:52.798 ' 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.798 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.799 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
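The host/discovery.sh prologue above fixes the values the rest of this run revolves around: DISCOVERY_PORT=8009, subsystem NQN prefix nqn.2016-06.io.spdk:cnode, HOST_NQN=nqn.2021-12.io.spdk:test and HOST_SOCK=/tmp/host.sock. The target will advertise a discovery subsystem on port 8009, and a second SPDK application driven through /tmp/host.sock will run the discovery client against it. A condensed, hand-written sketch of that pairing as it appears later in this trace; scripts/rpc.py stands in for the test's rpc_cmd wrapper, and this is not the verbatim host/discovery.sh:

    # Sketch only: values copied from the trace, rpc.py assumed to be SPDK's scripts/rpc.py.
    DISCOVERY_PORT=8009
    HOST_NQN=nqn.2021-12.io.spdk:test
    HOST_SOCK=/tmp/host.sock

    # Target side (default RPC socket): expose the discovery subsystem on 10.0.0.3:8009.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s "$DISCOVERY_PORT"

    # Host side (separate SPDK app on $HOST_SOCK): start the discovery client, which
    # auto-attaches any subsystem reported by the discovery log page.
    scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s "$DISCOVERY_PORT" -f ipv4 -q "$HOST_NQN"

Keeping the host in its own SPDK process with a dedicated RPC socket is what lets the test poll controllers and bdevs on the host side while reconfiguring the target through the default socket.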
00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.799 Cannot find device "nvmf_init_br" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.799 Cannot find device "nvmf_init_br2" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:52.799 Cannot find device "nvmf_tgt_br" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.799 Cannot find device "nvmf_tgt_br2" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.799 Cannot find device "nvmf_init_br" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.799 Cannot find device "nvmf_init_br2" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.799 Cannot find device "nvmf_tgt_br" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.799 Cannot find device "nvmf_tgt_br2" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.799 Cannot find device "nvmf_br" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.799 Cannot find device "nvmf_init_if" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.799 Cannot find device "nvmf_init_if2" 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.799 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:52.799 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:52.800 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.800 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:52.800 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.800 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.800 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:52.800 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.058 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.058 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.058 10:05:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.058 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:53.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:15:53.059 00:15:53.059 --- 10.0.0.3 ping statistics --- 00:15:53.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.059 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:53.059 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:53.059 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:15:53.059 00:15:53.059 --- 10.0.0.4 ping statistics --- 00:15:53.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.059 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:53.059 00:15:53.059 --- 10.0.0.1 ping statistics --- 00:15:53.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.059 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:53.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:53.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:53.059 00:15:53.059 --- 10.0.0.2 ping statistics --- 00:15:53.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.059 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75616 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75616 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75616 ']' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.059 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:53.317 [2024-11-04 10:05:25.253968] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
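With the bridge in place and all four cross-namespace pings succeeding, nvmfappstart launches the target inside the namespace (note the NVMF_APP prefix of ip netns exec nvmf_tgt_ns_spdk). For reference, a condensed sketch of the topology nvmf_veth_init built above, reconstructed from the traced commands; only one interface per side is shown (the script creates two of each) and this is not the verbatim function:

    # Hand-condensed sketch of the veth/netns/bridge topology in this run.
    ip netns add nvmf_tgt_ns_spdk

    # One initiator-side and one target-side veth pair; the *_br peers get enslaved to the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Initiator address stays in the root namespace, target address lives in the namespace.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Join both sides through the bridge and admit NVMe/TCP traffic.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check: the root namespace reaches the target address across the bridge.
    ping -c 1 10.0.0.3

Placing the target ends of the veth pairs in their own namespace makes 10.0.0.3/10.0.0.4 behave like remote addresses, so the discovery and connect paths are exercised over real TCP rather than loopback.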
00:15:53.317 [2024-11-04 10:05:25.254068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.317 [2024-11-04 10:05:25.399618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.317 [2024-11-04 10:05:25.461768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.317 [2024-11-04 10:05:25.461831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.317 [2024-11-04 10:05:25.461844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.317 [2024-11-04 10:05:25.461854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.317 [2024-11-04 10:05:25.461862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.317 [2024-11-04 10:05:25.462274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.576 [2024-11-04 10:05:25.516441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.576 [2024-11-04 10:05:25.637849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.576 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.577 [2024-11-04 10:05:25.650005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.577 10:05:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.577 null0 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.577 null1 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75646 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75646 /tmp/host.sock 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75646 ']' 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.577 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.577 10:05:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.577 [2024-11-04 10:05:25.741531] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
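Both SPDK applications are now coming up: the target nvmf_tgt (pid 75616) runs inside nvmf_tgt_ns_spdk on core mask 0x2 with a TCP transport, a discovery listener on 10.0.0.3:8009 and two null bdevs (null0, null1), while the host-side nvmf_tgt (pid 75646, core mask 0x1) is starting with its RPC socket at /tmp/host.sock. A condensed sketch of the target-side bring-up just traced; scripts/rpc.py against the target's default /var/tmp/spdk.sock stands in for the test's rpc_cmd helper:

    # Sketch of the target-side configuration above; flags copied verbatim from the trace.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512     # null bdev: 1000 MB, 512-byte blocks
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine               # let bdev examination settle before the host attaches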
00:15:53.577 [2024-11-04 10:05:25.741654] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75646 ] 00:15:53.835 [2024-11-04 10:05:25.895504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.835 [2024-11-04 10:05:25.965931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.095 [2024-11-04 10:05:26.025415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.095 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@91 -- # get_subsystem_names 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.354 [2024-11-04 10:05:26.486212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.354 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:54.613 10:05:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:54.613 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:15:54.614 10:05:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:15:55.181 [2024-11-04 10:05:27.133219] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:55.181 [2024-11-04 10:05:27.133269] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:55.181 [2024-11-04 10:05:27.133297] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:55.181 
[2024-11-04 10:05:27.139260] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:55.181 [2024-11-04 10:05:27.193728] bdev_nvme.c:5633:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:55.181 [2024-11-04 10:05:27.194892] bdev_nvme.c:1977:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x8efe40:1 started. 00:15:55.181 [2024-11-04 10:05:27.196821] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:55.181 [2024-11-04 10:05:27.196851] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:55.181 [2024-11-04 10:05:27.201698] bdev_nvme.c:1783:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x8efe40 was disconnected and freed. delete nvme_qpair. 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.749 10:05:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.749 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.750 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.009 [2024-11-04 10:05:27.965785] bdev_nvme.c:1977:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x8fdf40:1 started. 00:15:56.009 [2024-11-04 10:05:27.972796] bdev_nvme.c:1783:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x8fdf40 was disconnected and freed. delete nvme_qpair. 
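The checks above complete the first leg of the discovery flow: once nqn.2021-12.io.spdk:test was added as an allowed host on cnode0, the discovery service on 10.0.0.3:8009 reported the subsystem, the host attached it as controller nvme0 on port 4420 and bdev nvme0n1 appeared with one notification recorded. null1 has now been added as a second namespace, and the wait that follows expects the bdev list to grow to "nvme0n1 nvme0n2" and the notification counter to advance again. The waits are driven by small rpc/jq helpers; an approximate reconstruction from the pipelines visible in the trace (names match host/discovery.sh, bodies are a sketch, not the verbatim source):

    # Approximate reconstructions of the host-side polling helpers (HOST_SOCK=/tmp/host.sock).
    get_subsystem_names() {
        rpc_cmd -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_notification_count() {
        notification_count=$(rpc_cmd -s "$HOST_SOCK" notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    # waitforcondition "$cond" re-evaluates the expression (e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]')
    # up to 10 times with a 1-second sleep between attempts, returning non-zero if it never holds.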
00:15:56.009 10:05:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.009 [2024-11-04 10:05:28.071741] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:56.009 [2024-11-04 10:05:28.072095] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:56.009 [2024-11-04 10:05:28.072130] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:15:56.009 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:56.010 [2024-11-04 10:05:28.078079] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:56.010 [2024-11-04 10:05:28.136544] bdev_nvme.c:5633:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:56.010 [2024-11-04 10:05:28.136619] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:56.010 [2024-11-04 10:05:28.136634] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:56.010 [2024-11-04 10:05:28.136640] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.010 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- 
# (( max-- )) 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.270 [2024-11-04 10:05:28.308798] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:56.270 [2024-11-04 10:05:28.308851] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:56.270 [2024-11-04 10:05:28.314237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.270 [2024-11-04 10:05:28.314280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.270 [2024-11-04 10:05:28.314294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.270 [2024-11-04 10:05:28.314303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.270 [2024-11-04 
10:05:28.314313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.270 [2024-11-04 10:05:28.314322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.270 [2024-11-04 10:05:28.314333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.270 [2024-11-04 10:05:28.314342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.270 [2024-11-04 10:05:28.314352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cc230 is same with the state(6) to be set 00:15:56.270 [2024-11-04 10:05:28.314794] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:56.270 [2024-11-04 10:05:28.314819] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:56.270 [2024-11-04 10:05:28.314880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8cc230 (9): Bad file descriptor 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.270 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.271 10:05:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.271 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.530 10:05:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.530 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.789 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:56.790 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:56.790 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:56.790 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:56.790 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:56.790 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.790 10:05:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.768 [2024-11-04 10:05:29.743978] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:57.768 [2024-11-04 10:05:29.744012] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:57.768 [2024-11-04 10:05:29.744033] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:57.768 [2024-11-04 10:05:29.750017] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:57.768 [2024-11-04 10:05:29.808367] bdev_nvme.c:5633:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:57.768 [2024-11-04 10:05:29.809380] bdev_nvme.c:1977:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x8c4c40:1 started. 00:15:57.768 [2024-11-04 10:05:29.811909] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:57.768 [2024-11-04 10:05:29.811958] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:57.768 [2024-11-04 10:05:29.813457] bdev_nvme.c:1783:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x8c4c40 was disconnected and freed. delete nvme_qpair. 
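At this point discovery has been restarted against 10.0.0.3:8009 and the controller re-attached via port 4421. The next step (host/discovery.sh@143) deliberately re-issues bdev_nvme_start_discovery under the same -b name and expects the JSON-RPC error -17 ("File exists") shown in the request/response that follows, since a discovery service named nvme is already running. A hedged sketch of the equivalent direct invocation (socket path, target address and hostnqn are copied from the trace; the error handling is illustrative only):

    # Starting discovery twice with the same -b name should fail with -17.
    # rpc.py flags mirror the traced command: TCP/IPv4 discovery on 10.0.0.3:8009,
    # hostnqn nqn.2021-12.io.spdk:test, -w = wait for the initial attach.
    if ! rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
            -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
            -q nqn.2021-12.io.spdk:test -w; then
        echo "second start_discovery rejected as expected (File exists)" >&2
    fi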
00:15:57.768 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.769 request: 00:15:57.769 { 00:15:57.769 "name": "nvme", 00:15:57.769 "trtype": "tcp", 00:15:57.769 "traddr": "10.0.0.3", 00:15:57.769 "adrfam": "ipv4", 00:15:57.769 "trsvcid": "8009", 00:15:57.769 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:57.769 "wait_for_attach": true, 00:15:57.769 "method": "bdev_nvme_start_discovery", 00:15:57.769 "req_id": 1 00:15:57.769 } 00:15:57.769 Got JSON-RPC error response 00:15:57.769 response: 00:15:57.769 { 00:15:57.769 "code": -17, 00:15:57.769 "message": "File exists" 00:15:57.769 } 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:57.769 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.028 request: 00:15:58.028 { 00:15:58.028 "name": "nvme_second", 00:15:58.028 "trtype": "tcp", 00:15:58.028 "traddr": "10.0.0.3", 00:15:58.028 "adrfam": "ipv4", 00:15:58.028 "trsvcid": "8009", 00:15:58.028 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:58.028 "wait_for_attach": true, 00:15:58.028 "method": "bdev_nvme_start_discovery", 00:15:58.028 "req_id": 1 00:15:58.028 } 00:15:58.028 Got JSON-RPC error response 00:15:58.028 response: 00:15:58.028 { 00:15:58.028 "code": -17, 00:15:58.028 "message": "File exists" 00:15:58.028 } 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:58.028 10:05:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.028 10:05:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.964 [2024-11-04 10:05:31.092387] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:58.964 [2024-11-04 10:05:31.092475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c3040 with addr=10.0.0.3, port=8010 00:15:58.964 [2024-11-04 10:05:31.092502] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:58.964 [2024-11-04 10:05:31.092513] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:58.964 [2024-11-04 10:05:31.092523] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:00.342 [2024-11-04 10:05:32.092400] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:00.342 [2024-11-04 10:05:32.092513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c3040 with addr=10.0.0.3, port=8010 00:16:00.342 [2024-11-04 10:05:32.092540] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:00.342 [2024-11-04 10:05:32.092551] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:00.342 [2024-11-04 10:05:32.092560] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:01.279 [2024-11-04 10:05:33.092224] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:01.279 request: 00:16:01.279 { 00:16:01.279 "name": "nvme_second", 00:16:01.279 "trtype": "tcp", 00:16:01.279 "traddr": "10.0.0.3", 00:16:01.279 "adrfam": "ipv4", 00:16:01.279 "trsvcid": "8010", 00:16:01.279 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:01.279 "wait_for_attach": false, 00:16:01.279 "attach_timeout_ms": 3000, 00:16:01.279 "method": "bdev_nvme_start_discovery", 00:16:01.279 "req_id": 1 00:16:01.279 } 00:16:01.279 Got JSON-RPC error response 00:16:01.279 response: 00:16:01.279 { 00:16:01.279 "code": -110, 00:16:01.279 "message": "Connection timed out" 00:16:01.279 } 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:01.279 
10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:01.279 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75646 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.280 rmmod nvme_tcp 00:16:01.280 rmmod nvme_fabrics 00:16:01.280 rmmod nvme_keyring 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75616 ']' 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75616 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 75616 ']' 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 75616 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75616 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:01.280 killing process with pid 75616 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75616' 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 75616 00:16:01.280 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 75616 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:01.539 10:05:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:01.539 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:01.798 00:16:01.798 real 0m9.257s 00:16:01.798 user 0m17.401s 00:16:01.798 sys 0m2.112s 00:16:01.798 ************************************ 00:16:01.798 END TEST nvmf_host_discovery 00:16:01.798 ************************************ 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.798 ************************************ 00:16:01.798 START TEST nvmf_host_multipath_status 00:16:01.798 ************************************ 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:01.798 * Looking for test storage... 00:16:01.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:16:01.798 10:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:02.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.059 --rc genhtml_branch_coverage=1 00:16:02.059 --rc genhtml_function_coverage=1 00:16:02.059 --rc genhtml_legend=1 00:16:02.059 --rc geninfo_all_blocks=1 00:16:02.059 --rc geninfo_unexecuted_blocks=1 00:16:02.059 00:16:02.059 ' 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:02.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.059 --rc genhtml_branch_coverage=1 00:16:02.059 --rc genhtml_function_coverage=1 00:16:02.059 --rc genhtml_legend=1 00:16:02.059 --rc geninfo_all_blocks=1 00:16:02.059 --rc geninfo_unexecuted_blocks=1 00:16:02.059 00:16:02.059 ' 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:02.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.059 --rc genhtml_branch_coverage=1 00:16:02.059 --rc genhtml_function_coverage=1 00:16:02.059 --rc genhtml_legend=1 00:16:02.059 --rc geninfo_all_blocks=1 00:16:02.059 --rc geninfo_unexecuted_blocks=1 00:16:02.059 00:16:02.059 ' 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:02.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.059 --rc genhtml_branch_coverage=1 00:16:02.059 --rc genhtml_function_coverage=1 00:16:02.059 --rc genhtml_legend=1 00:16:02.059 --rc geninfo_all_blocks=1 00:16:02.059 --rc geninfo_unexecuted_blocks=1 00:16:02.059 00:16:02.059 ' 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.059 10:05:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.059 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.060 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:02.060 Cannot find device "nvmf_init_br" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:02.060 Cannot find device "nvmf_init_br2" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:02.060 Cannot find device "nvmf_tgt_br" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.060 Cannot find device "nvmf_tgt_br2" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:02.060 Cannot find device "nvmf_init_br" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:02.060 Cannot find device "nvmf_init_br2" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:02.060 Cannot find device "nvmf_tgt_br" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:02.060 Cannot find device "nvmf_tgt_br2" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:02.060 Cannot find device "nvmf_br" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:02.060 Cannot find device "nvmf_init_if" 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:02.060 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:02.320 Cannot find device "nvmf_init_if2" 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:02.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:16:02.320 00:16:02.320 --- 10.0.0.3 ping statistics --- 00:16:02.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.320 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:02.320 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:02.320 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:16:02.320 00:16:02.320 --- 10.0.0.4 ping statistics --- 00:16:02.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.320 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:02.320 00:16:02.320 --- 10.0.0.1 ping statistics --- 00:16:02.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.320 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:02.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:02.320 00:16:02.320 --- 10.0.0.2 ping statistics --- 00:16:02.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.320 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:02.320 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76152 00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76152 00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76152 ']' 00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:02.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
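The nvmf_veth_init trace above boils down to the small topology script below. This is a condensed sketch reconstructed from the traced ip/iptables commands, not the test source; the namespace, interface names, addresses and port are the ones shown in this log, while the loops, ordering and error handling are illustrative.

    #!/usr/bin/env bash
    # Two initiator veth pairs stay on the host, two target veth pairs move into
    # the nvmf_tgt_ns_spdk namespace, and all host-side peers are bridged together.
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Accept NVMe/TCP traffic on the initiator interfaces and let it cross the bridge
    # (the test adds the same rules with an SPDK_NVMF comment for later cleanup).
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check, mirroring the pings in the log: host to target namespace and back.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2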
00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:02.579 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:02.579 [2024-11-04 10:05:34.558902] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:16:02.579 [2024-11-04 10:05:34.559001] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.579 [2024-11-04 10:05:34.714018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:02.838 [2024-11-04 10:05:34.781643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.838 [2024-11-04 10:05:34.781718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.838 [2024-11-04 10:05:34.781732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.838 [2024-11-04 10:05:34.781742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.838 [2024-11-04 10:05:34.781751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.838 [2024-11-04 10:05:34.783099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.838 [2024-11-04 10:05:34.783114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.838 [2024-11-04 10:05:34.845009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:02.838 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:02.838 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:16:02.838 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:02.838 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:02.838 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:02.838 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.838 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76152 00:16:02.838 10:05:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:03.097 [2024-11-04 10:05:35.261425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.355 10:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:03.614 Malloc0 00:16:03.614 10:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:03.901 10:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.168 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:04.428 [2024-11-04 10:05:36.482570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.428 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:04.687 [2024-11-04 10:05:36.734769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76200 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76200 /var/tmp/bdevperf.sock 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76200 ']' 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:04.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
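The target and bdevperf bring-up traced above can be summarized as the RPC sequence below. Paths, the NQN, ports and sizes are taken verbatim from this log; the script itself is a condensed sketch (the real test uses the nvmfappstart/waitforlisten helpers, replaced here by a simple polling loop).

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target runs inside the test namespace on cores 0-1 (-m 0x3).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

    # Wait for the target's RPC socket before configuring it.
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

    # TCP transport, a 64 MiB / 512 B malloc bdev, and a subsystem with ANA
    # reporting enabled (-r), exported on two listeners of the same target address.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

    # Host side: bdevperf started in wait-for-RPC mode (-z) on its own socket,
    # ready for the bdev_nvme_attach_controller calls that follow in the log.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &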
00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:04.687 10:05:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:05.623 10:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:05.623 10:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:16:05.623 10:05:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:06.191 10:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:06.450 Nvme0n1 00:16:06.450 10:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:06.709 Nvme0n1 00:16:06.709 10:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:06.709 10:05:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:09.246 10:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:09.246 10:05:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:09.246 10:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:09.246 10:05:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:10.645 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:10.645 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:10.645 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.645 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:10.645 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.645 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:10.645 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:10.645 10:05:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.921 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.921 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:10.921 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.921 10:05:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:11.180 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.180 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.180 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.180 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.439 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.439 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:11.439 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.439 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.697 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.697 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:11.697 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.697 10:05:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:12.264 10:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.264 10:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:12.264 10:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:12.264 10:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
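Every check_status round in this log follows the same pattern: set the ANA state of each listener with nvmf_subsystem_listener_set_ana_state on the target, wait a second, then query the host's view of both paths through bdevperf's RPC socket with bdev_nvme_get_io_paths plus a jq filter per port. A minimal sketch of one such round, using only the RPCs and jq filters that appear above; the expected values are the ones the non_optimized/optimized round just started here goes on to check.

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Return one flag (current/connected/accessible) of the path using the given port.
    port_status() {
        local port=$1 field=$2
        $rpc -s "$bdevperf_rpc" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
    }

    # Round from the log: 4420 non_optimized, 4421 optimized.
    $rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n optimized
    sleep 1

    port_status 4420 current      # expected: false (the optimized path is preferred)
    port_status 4421 current      # expected: true
    port_status 4420 connected    # expected: true  (still connected, just not used)
    port_status 4421 accessible   # expected: true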
00:16:12.523 10:05:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:13.900 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:13.900 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:13.900 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.900 10:05:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:13.900 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:13.900 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:13.900 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.900 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:14.159 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.159 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.159 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.159 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.728 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.728 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.728 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:14.728 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.987 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.987 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:14.987 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.987 10:05:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:15.247 10:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.247 10:05:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:15.247 10:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.247 10:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:15.506 10:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.506 10:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:15.506 10:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:15.765 10:05:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:16.062 10:05:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:17.025 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:17.025 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:17.025 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.025 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:17.284 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.284 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:17.284 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.284 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:17.543 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.543 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:17.543 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.543 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.801 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.801 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.801 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.801 10:05:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:18.060 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.060 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:18.060 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.060 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:18.318 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.318 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:18.318 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.318 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:18.886 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.886 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:18.886 10:05:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:19.145 10:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:19.404 10:05:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:20.339 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:20.339 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:20.340 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:20.340 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.599 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.599 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:16:20.599 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.599 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:20.857 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:20.857 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:20.857 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.857 10:05:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.115 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.115 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:21.115 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.115 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:21.375 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.375 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:21.375 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.375 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:21.633 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.633 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:21.633 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.634 10:05:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:22.209 10:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:22.209 10:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:22.209 10:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:22.209 10:05:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:22.468 10:05:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:23.848 10:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:23.848 10:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:23.848 10:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.848 10:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.848 10:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.848 10:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.848 10:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.848 10:05:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:24.107 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.107 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:24.107 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.107 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:24.366 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.366 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:24.366 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.366 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:24.625 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.626 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:24.626 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:24.626 10:05:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.885 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.885 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:24.885 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.885 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:25.453 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:25.453 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:25.453 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:25.453 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:25.712 10:05:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:27.090 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:27.090 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:27.090 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.090 10:05:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:27.090 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.090 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:27.090 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.090 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:27.349 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.349 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:27.349 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.349 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:16:27.608 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.608 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:27.608 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:27.608 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.866 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.866 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:27.866 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.866 10:05:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:28.125 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:28.125 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:28.125 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.125 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:28.384 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.384 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:28.951 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:28.951 10:06:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:29.210 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:29.469 10:06:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:30.405 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:30.405 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:30.405 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:16:30.405 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.664 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.664 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:30.664 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.664 10:06:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:31.231 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.231 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:31.231 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:31.231 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.490 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.490 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:31.490 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.490 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:31.749 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.749 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:31.749 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.749 10:06:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:32.008 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.008 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:32.008 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:32.008 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.267 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.267 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:32.267 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:32.834 10:06:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:32.835 10:06:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:34.212 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:34.212 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:34.212 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.213 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:34.213 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.213 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:34.213 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.213 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:34.471 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.471 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:34.471 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.471 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:35.052 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.052 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:35.052 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:35.052 10:06:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.311 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
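For readability, the following is a minimal reconstruction of the three helpers being traced above (multipath_status.sh@59-@73), inferred only from the commands visible in this log. The rpc.py path, NQN, target address and bdevperf RPC socket are copied verbatim from the trace; the local variable names are assumptions and the real test script may differ in detail.

#!/usr/bin/env bash
# Sketch of the traced helpers, reconstructed from the log above.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
target_ip=10.0.0.3   # assumed name for the 10.0.0.3 address seen in the trace

set_ANA_state() {
	# $1: ANA state for the 4420 listener, $2: ANA state for the 4421 listener
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t tcp -a "$target_ip" -s 4420 -n "$1"
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t tcp -a "$target_ip" -s 4421 -n "$2"
}

port_status() {
	# $1: port, $2: io_path field (current|connected|accessible), $3: expected value
	[[ $($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
		| jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

check_status() {
	# Expected current/connected/accessible flags for ports 4420 and 4421,
	# in the same order as the traced calls (@68-@73).
	port_status 4420 current "$1"
	port_status 4421 current "$2"
	port_status 4420 connected "$3"
	port_status 4421 connected "$4"
	port_status 4420 accessible "$5"
	port_status 4421 accessible "$6"
}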
00:16:35.311 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:35.311 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.311 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:35.570 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.570 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:35.570 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.570 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:35.830 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.830 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:35.830 10:06:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:36.089 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:36.348 10:06:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:37.725 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:37.725 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:37.725 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.725 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:37.725 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.725 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:37.725 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:37.725 10:06:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.010 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.010 10:06:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:38.010 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.010 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:38.578 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.578 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:38.578 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.578 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:38.837 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.837 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:38.837 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.837 10:06:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:39.096 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.096 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:39.096 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.096 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:39.355 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.355 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:39.355 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:39.614 10:06:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:39.876 10:06:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:41.254 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:41.254 10:06:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:41.254 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.254 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:41.254 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.254 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:41.254 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.254 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:41.513 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:41.513 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:41.513 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.513 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:41.772 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.772 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:41.772 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.772 10:06:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.031 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.031 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.031 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.031 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:42.633 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.633 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:42.633 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.633 
10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76200 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76200 ']' 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76200 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76200 00:16:42.634 killing process with pid 76200 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76200' 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76200 00:16:42.634 10:06:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76200 00:16:42.906 { 00:16:42.906 "results": [ 00:16:42.906 { 00:16:42.906 "job": "Nvme0n1", 00:16:42.906 "core_mask": "0x4", 00:16:42.906 "workload": "verify", 00:16:42.906 "status": "terminated", 00:16:42.906 "verify_range": { 00:16:42.906 "start": 0, 00:16:42.906 "length": 16384 00:16:42.906 }, 00:16:42.906 "queue_depth": 128, 00:16:42.906 "io_size": 4096, 00:16:42.906 "runtime": 35.884085, 00:16:42.906 "iops": 7996.859889279607, 00:16:42.906 "mibps": 31.237733942498465, 00:16:42.906 "io_failed": 0, 00:16:42.906 "io_timeout": 0, 00:16:42.906 "avg_latency_us": 15973.672148211977, 00:16:42.906 "min_latency_us": 521.3090909090909, 00:16:42.906 "max_latency_us": 4057035.869090909 00:16:42.906 } 00:16:42.906 ], 00:16:42.906 "core_count": 1 00:16:42.906 } 00:16:42.906 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76200 00:16:42.906 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:42.906 [2024-11-04 10:05:36.815499] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
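As an aside, the per-job result block printed just above by bdevperf is plain JSON once the log-time prefixes are stripped, so it can be summarized with jq. This is illustrative only and not part of the traced test; the results.json filename is an assumption.

# Summarize the bdevperf result JSON shown above (saved without log prefixes).
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us over \(.runtime) s"' results.json
# For the run above this would print (values copied from the log):
# Nvme0n1: 7996.859889279607 IOPS, avg latency 15973.672148211977 us over 35.884085 s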
00:16:42.906 [2024-11-04 10:05:36.815641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76200 ] 00:16:42.906 [2024-11-04 10:05:36.964614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.906 [2024-11-04 10:05:37.035535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.906 [2024-11-04 10:05:37.097727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:42.906 Running I/O for 90 seconds... 00:16:42.906 6805.00 IOPS, 26.58 MiB/s [2024-11-04T10:06:15.076Z] 6794.50 IOPS, 26.54 MiB/s [2024-11-04T10:06:15.076Z] 6833.33 IOPS, 26.69 MiB/s [2024-11-04T10:06:15.076Z] 6789.25 IOPS, 26.52 MiB/s [2024-11-04T10:06:15.076Z] 6813.80 IOPS, 26.62 MiB/s [2024-11-04T10:06:15.076Z] 7009.83 IOPS, 27.38 MiB/s [2024-11-04T10:06:15.076Z] 7306.71 IOPS, 28.54 MiB/s [2024-11-04T10:06:15.076Z] 7526.38 IOPS, 29.40 MiB/s [2024-11-04T10:06:15.076Z] 7690.11 IOPS, 30.04 MiB/s [2024-11-04T10:06:15.076Z] 7845.80 IOPS, 30.65 MiB/s [2024-11-04T10:06:15.076Z] 7977.64 IOPS, 31.16 MiB/s [2024-11-04T10:06:15.076Z] 8090.83 IOPS, 31.60 MiB/s [2024-11-04T10:06:15.076Z] 8181.69 IOPS, 31.96 MiB/s [2024-11-04T10:06:15.076Z] 8258.71 IOPS, 32.26 MiB/s [2024-11-04T10:06:15.076Z] 8323.87 IOPS, 32.52 MiB/s [2024-11-04T10:06:15.076Z] [2024-11-04 10:05:54.336551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.336649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.336688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.336707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.336730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.336746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.336769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.336785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.336806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.336822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.336844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.336860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.336881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.336897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.336919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.336935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.336957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.337004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.337030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.337046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.337068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.337084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.337106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.906 [2024-11-04 10:05:54.337122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.906 [2024-11-04 10:05:54.337144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:42.907 [2024-11-04 10:05:54.337271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.337308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.337348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.337385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.337423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.337461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.337511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.337550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.337600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:104 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.337967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.337989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.338004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.338056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.338094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338116] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.338132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.338170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.338207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.338245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.907 [2024-11-04 10:05:54.338283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.907 [2024-11-04 10:05:54.338746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:42.907 [2024-11-04 10:05:54.338768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.338784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.338806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.338822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.338857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.338873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.338895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.338911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.338933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.338948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.338970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.338986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.339032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.339072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.339109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.339147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.339185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.339224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.339265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 
10:05:54.339303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112864 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.339975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.339991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.340029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.340080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.908 [2024-11-04 10:05:54.340119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340141] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.340156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.340194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.340232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.340269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.340306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.908 [2024-11-04 10:05:54.340344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:42.908 [2024-11-04 10:05:54.340366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.340382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.340420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:42.909 
[2024-11-04 10:05:54.340528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.340916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.340954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.340976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.340992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.341486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.341513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.342871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.909 [2024-11-04 10:05:54.342903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.342933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.342952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.342975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.342991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.343014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.343030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.343052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:42.909 [2024-11-04 10:05:54.343068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.343090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.343106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.343128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.343144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.343166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.343182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.343227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.343248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.343271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.909 [2024-11-04 10:05:54.343286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:42.909 [2024-11-04 10:05:54.343309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.343866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.343881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 
p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.910 [2024-11-04 10:05:54.344945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.344968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.344984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.345006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.910 [2024-11-04 10:05:54.345022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:42.910 [2024-11-04 10:05:54.345044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.345060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.345098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.345142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.345187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.345225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.345271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112768 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.345932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.345970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.345992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.346008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.346030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.346045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.346067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.346083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.346105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.346120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.358143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.358187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.911 [2024-11-04 10:05:54.358227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358271] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:42.911 [2024-11-04 10:05:54.358614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.911 [2024-11-04 10:05:54.358632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.358655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 
10:05:54.358693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.358731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.358784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.358828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.358866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.358905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.358943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.358981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.358997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.912 [2024-11-04 10:05:54.359870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:42.912 [2024-11-04 10:05:54.359907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.359968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.359984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.360007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.360023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.360045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.360061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.360105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.360127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.360159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.360181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.360213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.360235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:42.912 [2024-11-04 10:05:54.360267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.912 [2024-11-04 10:05:54.360289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.360361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:98 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.360428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.360482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.360537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.360592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.360664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.360718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.360772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.360826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.360880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.360935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.360967] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.360989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 
sqhd:001c p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.361965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.361997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.362019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.364634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.364686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.364733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.913 [2024-11-04 10:05:54.364759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.364793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.364818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.364852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.364874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.364906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.364928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.364960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.364982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.365015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.365038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.365071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.365115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.913 [2024-11-04 10:05:54.365150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.913 [2024-11-04 10:05:54.365173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.365228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 
10:05:54.365282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.365336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.365390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.365443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.365498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.365579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.365655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.365709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.365762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.365854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112160 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.365912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.365965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.365997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.366021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.366076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.366130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.366185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.366953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.366993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.367015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.367070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.367124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:16:42.914 [2024-11-04 10:05:54.367155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.367177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.914 [2024-11-04 10:05:54.367240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.367294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.367349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.367420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.367474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.367528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:42.914 [2024-11-04 10:05:54.367560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.914 [2024-11-04 10:05:54.367582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.367635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.367659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.367691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.367713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.367745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.367774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.367832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.367857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.367889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.367911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.367943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.367965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.367997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368313] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.368908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.368958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.369019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112936 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.369095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.369149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.369202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.369256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.369309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.369364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.369419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.369473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.915 [2024-11-04 10:05:54.369528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.369581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369637] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.369677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.369750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.915 [2024-11-04 10:05:54.369806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:42.915 [2024-11-04 10:05:54.369838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.369860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.369892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.369914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.369945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.369967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.369999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.370021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.370074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.370134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.370188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 
10:05:54.370220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.370241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.370956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.370978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.371032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.371086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.371140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.916 [2024-11-04 10:05:54.371214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.371966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.371990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.372022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:42.916 [2024-11-04 10:05:54.372044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.372076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.372098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.372131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.372153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:42.916 [2024-11-04 10:05:54.372185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.916 [2024-11-04 10:05:54.372207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:05:54.372240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:05:54.372262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:05:54.372294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:05:54.372316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:05:54.372348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:05:54.372370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:05:54.372402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:05:54.372424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:05:54.372464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:05:54.372480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:05:54.373070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:05:54.373116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.917 8043.62 IOPS, 31.42 MiB/s [2024-11-04T10:06:15.087Z] 7570.47 IOPS, 29.57 MiB/s 
[2024-11-04T10:06:15.087Z] 7149.89 IOPS, 27.93 MiB/s [2024-11-04T10:06:15.087Z] 6773.58 IOPS, 26.46 MiB/s [2024-11-04T10:06:15.087Z] 6680.70 IOPS, 26.10 MiB/s [2024-11-04T10:06:15.087Z] 6790.29 IOPS, 26.52 MiB/s [2024-11-04T10:06:15.087Z] 6888.55 IOPS, 26.91 MiB/s [2024-11-04T10:06:15.087Z] 7095.26 IOPS, 27.72 MiB/s [2024-11-04T10:06:15.087Z] 7303.33 IOPS, 28.53 MiB/s [2024-11-04T10:06:15.087Z] 7430.40 IOPS, 29.02 MiB/s [2024-11-04T10:06:15.087Z] 7489.85 IOPS, 29.26 MiB/s [2024-11-04T10:06:15.087Z] 7509.04 IOPS, 29.33 MiB/s [2024-11-04T10:06:15.087Z] 7527.14 IOPS, 29.40 MiB/s [2024-11-04T10:06:15.087Z] 7526.90 IOPS, 29.40 MiB/s [2024-11-04T10:06:15.087Z] 7539.10 IOPS, 29.45 MiB/s [2024-11-04T10:06:15.087Z] 7614.06 IOPS, 29.74 MiB/s [2024-11-04T10:06:15.087Z] 7747.44 IOPS, 30.26 MiB/s [2024-11-04T10:06:15.087Z] 7887.55 IOPS, 30.81 MiB/s [2024-11-04T10:06:15.087Z] [2024-11-04 10:06:12.017032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.017238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.017319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.017354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.017389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125304 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.017719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.017757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.017947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.017969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.017985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.018054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.018089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.018125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.018160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.018208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.018244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.018280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 
10:06:12.018301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.917 [2024-11-04 10:06:12.018316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.018352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:42.917 [2024-11-04 10:06:12.018372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.917 [2024-11-04 10:06:12.018387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.018423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.018458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.018494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.018531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.018566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.018603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.018698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.018736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.018774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.018815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.018853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.018891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.018929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.018952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.018982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.019071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.019139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.019423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.019477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.019515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.019553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.019590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.019652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.019667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.021066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.021116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.021155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.021192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.021229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.021266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:125848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.918 [2024-11-04 10:06:12.021304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.918 [2024-11-04 10:06:12.021340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:42.918 [2024-11-04 10:06:12.021362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.919 [2024-11-04 10:06:12.021378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.919 [2024-11-04 10:06:12.021415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.919 [2024-11-04 10:06:12.021451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.919 [2024-11-04 10:06:12.021488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.919 [2024-11-04 10:06:12.021552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.919 [2024-11-04 10:06:12.021615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.919 [2024-11-04 10:06:12.021688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.919 [2024-11-04 10:06:12.021727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.919 [2024-11-04 10:06:12.021766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:42.919 [2024-11-04 10:06:12.021805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.919 [2024-11-04 10:06:12.021844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:42.919 [2024-11-04 10:06:12.021866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:42.919 [2024-11-04 10:06:12.021882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:42.919 7938.59 IOPS, 31.01 MiB/s [2024-11-04T10:06:15.089Z] 7972.11 IOPS, 31.14 MiB/s [2024-11-04T10:06:15.089Z] Received shutdown signal, test time was about 35.884884 seconds 00:16:42.919 00:16:42.919 Latency(us) 00:16:42.919 [2024-11-04T10:06:15.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.919 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:42.919 Verification LBA range: start 0x0 length 0x4000 00:16:42.919 Nvme0n1 : 35.88 7996.86 31.24 0.00 0.00 15973.67 521.31 4057035.87 00:16:42.919 [2024-11-04T10:06:15.089Z] =================================================================================================================== 00:16:42.919 [2024-11-04T10:06:15.089Z] Total : 7996.86 31.24 0.00 0.00 15973.67 521.31 4057035.87 00:16:42.919 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.178 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:43.178 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:43.178 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:43.178 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:43.178 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:43.437 rmmod nvme_tcp 00:16:43.437 rmmod nvme_fabrics 
00:16:43.437 rmmod nvme_keyring 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76152 ']' 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76152 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76152 ']' 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76152 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76152 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:43.437 killing process with pid 76152 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76152' 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76152 00:16:43.437 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76152 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set 
nvmf_init_br down 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:43.697 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:43.957 00:16:43.957 real 0m42.064s 00:16:43.957 user 2m16.587s 00:16:43.957 sys 0m12.487s 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:43.957 ************************************ 00:16:43.957 END TEST nvmf_host_multipath_status 00:16:43.957 ************************************ 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.957 ************************************ 00:16:43.957 START TEST nvmf_discovery_remove_ifc 00:16:43.957 ************************************ 00:16:43.957 10:06:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:43.957 * Looking for test storage... 
00:16:43.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:43.957 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:43.957 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:43.957 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.218 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:44.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.219 --rc genhtml_branch_coverage=1 00:16:44.219 --rc genhtml_function_coverage=1 00:16:44.219 --rc genhtml_legend=1 00:16:44.219 --rc geninfo_all_blocks=1 00:16:44.219 --rc geninfo_unexecuted_blocks=1 00:16:44.219 00:16:44.219 ' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:44.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.219 --rc genhtml_branch_coverage=1 00:16:44.219 --rc genhtml_function_coverage=1 00:16:44.219 --rc genhtml_legend=1 00:16:44.219 --rc geninfo_all_blocks=1 00:16:44.219 --rc geninfo_unexecuted_blocks=1 00:16:44.219 00:16:44.219 ' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:44.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.219 --rc genhtml_branch_coverage=1 00:16:44.219 --rc genhtml_function_coverage=1 00:16:44.219 --rc genhtml_legend=1 00:16:44.219 --rc geninfo_all_blocks=1 00:16:44.219 --rc geninfo_unexecuted_blocks=1 00:16:44.219 00:16:44.219 ' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:44.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.219 --rc genhtml_branch_coverage=1 00:16:44.219 --rc genhtml_function_coverage=1 00:16:44.219 --rc genhtml_legend=1 00:16:44.219 --rc geninfo_all_blocks=1 00:16:44.219 --rc geninfo_unexecuted_blocks=1 00:16:44.219 00:16:44.219 ' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.219 10:06:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:44.219 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:44.219 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.220 10:06:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:44.220 Cannot find device "nvmf_init_br" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:44.220 Cannot find device "nvmf_init_br2" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:44.220 Cannot find device "nvmf_tgt_br" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.220 Cannot find device "nvmf_tgt_br2" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:44.220 Cannot find device "nvmf_init_br" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:44.220 Cannot find device "nvmf_init_br2" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:44.220 Cannot find device "nvmf_tgt_br" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:44.220 Cannot find device "nvmf_tgt_br2" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:44.220 Cannot find device "nvmf_br" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:44.220 Cannot find device "nvmf_init_if" 00:16:44.220 10:06:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:44.220 Cannot find device "nvmf_init_if2" 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:44.220 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.480 10:06:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:44.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:44.480 00:16:44.480 --- 10.0.0.3 ping statistics --- 00:16:44.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.480 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:44.480 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:44.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:44.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:44.481 00:16:44.481 --- 10.0.0.4 ping statistics --- 00:16:44.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.481 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:44.481 00:16:44.481 --- 10.0.0.1 ping statistics --- 00:16:44.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.481 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:44.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:44.481 00:16:44.481 --- 10.0.0.2 ping statistics --- 00:16:44.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.481 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77064 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77064 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77064 ']' 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:44.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
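The nvmf_veth_init trace above builds the test network one command at a time. Condensed, the topology it creates looks roughly like the sketch below; the namespace, interface names, and 10.0.0.x addresses are taken directly from the trace, while error handling, the SPDK_NVMF iptables comment tags, and pre-existing-device cleanup are omitted, so treat this as an illustration rather than the script itself.

# Target-side interfaces live in a network namespace; host-side veth peers are bridged.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace that will run nvmf_tgt.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Initiator addresses on the host, target addresses inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring everything up and enslave the host-side peers to one bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
# Accept NVMe/TCP traffic on port 4420 and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity checks matching the pings in the trace: host <-> namespace both ways.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2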
00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:44.481 10:06:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.740 [2024-11-04 10:06:16.689670] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:16:44.740 [2024-11-04 10:06:16.689770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.740 [2024-11-04 10:06:16.838860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.740 [2024-11-04 10:06:16.904220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.740 [2024-11-04 10:06:16.904301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.740 [2024-11-04 10:06:16.904323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.740 [2024-11-04 10:06:16.904345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.740 [2024-11-04 10:06:16.904354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.740 [2024-11-04 10:06:16.904823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.999 [2024-11-04 10:06:16.963721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.999 [2024-11-04 10:06:17.090284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.999 [2024-11-04 10:06:17.098439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:44.999 null0 00:16:44.999 [2024-11-04 10:06:17.130330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77089 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77089 /tmp/host.sock 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77089 ']' 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:44.999 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:44.999 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.258 [2024-11-04 10:06:17.214921] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:16:45.258 [2024-11-04 10:06:17.215038] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77089 ] 00:16:45.258 [2024-11-04 10:06:17.366707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.518 [2024-11-04 10:06:17.430553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.518 [2024-11-04 10:06:17.550194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.518 10:06:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.454 [2024-11-04 10:06:18.603424] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:46.454 [2024-11-04 10:06:18.603481] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:46.454 [2024-11-04 10:06:18.603504] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:46.454 [2024-11-04 10:06:18.609465] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:46.731 [2024-11-04 10:06:18.663917] bdev_nvme.c:5633:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:46.731 [2024-11-04 10:06:18.664899] bdev_nvme.c:1977:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5c4fa0:1 started. 00:16:46.731 [2024-11-04 10:06:18.666772] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:46.731 [2024-11-04 10:06:18.666846] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:46.731 [2024-11-04 10:06:18.666874] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:46.731 [2024-11-04 10:06:18.666891] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:46.731 [2024-11-04 10:06:18.666915] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:46.731 [2024-11-04 10:06:18.672137] bdev_nvme.c:1783:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5c4fa0 was disconnected and freed. delete nvme_qpair. 
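The attach sequence traced above (discovery ctrlr attached, log page read, subsystem nvme0 created, qpair connected, nvme0n1 exposed) is driven by the bdev_nvme_start_discovery RPC whose arguments appear verbatim in the trace. A minimal hand-run sketch of that step against the same target, assuming the host-side app is still listening on /tmp/host.sock and that scripts/rpc.py from the checked-out repo stands in for the test's rpc_cmd wrapper:

#!/usr/bin/env bash
# Hedged sketch; address, ports, NQN and timeout values mirror the trace above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

# Attach through the discovery service on 10.0.0.3:8009 and block until the
# discovered namespaces are exposed as bdevs.
$RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

# The discovered namespace shows up as a bdev (nvme0n1 in this run).
$RPC bdev_get_bdevs | jq -r '.[].name'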
00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:46.731 10:06:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:47.666 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:47.666 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.666 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:47.666 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.666 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.666 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.666 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:47.666 10:06:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.925 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:47.925 10:06:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:48.863 10:06:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:49.800 10:06:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:51.177 10:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:51.177 10:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.177 10:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.177 10:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.177 10:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.177 10:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.177 10:06:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.177 10:06:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.177 10:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.177 10:06:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:52.152 10:06:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.152 [2024-11-04 10:06:24.094838] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:52.152 [2024-11-04 10:06:24.094913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.152 [2024-11-04 10:06:24.094929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.152 [2024-11-04 10:06:24.094942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.152 [2024-11-04 10:06:24.094951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.152 [2024-11-04 10:06:24.094961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.152 [2024-11-04 10:06:24.094982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.152 [2024-11-04 10:06:24.095007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.152 [2024-11-04 10:06:24.095016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.152 [2024-11-04 10:06:24.095041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:52.152 [2024-11-04 10:06:24.095065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:52.152 [2024-11-04 10:06:24.095074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1240 is same with the state(6) to be set 00:16:52.152 [2024-11-04 10:06:24.104835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a1240 (9): Bad file descriptor 00:16:52.152 [2024-11-04 10:06:24.114848] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:52.152 [2024-11-04 10:06:24.114888] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:52.152 [2024-11-04 10:06:24.114895] bdev_nvme.c:2126:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:52.152 [2024-11-04 10:06:24.114901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:52.152 [2024-11-04 10:06:24.114955] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.092 [2024-11-04 10:06:25.135701] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:53.092 [2024-11-04 10:06:25.135814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5a1240 with addr=10.0.0.3, port=4420 00:16:53.092 [2024-11-04 10:06:25.135842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1240 is same with the state(6) to be set 00:16:53.092 [2024-11-04 10:06:25.135892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a1240 (9): Bad file descriptor 00:16:53.092 [2024-11-04 10:06:25.136599] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:53.092 [2024-11-04 10:06:25.136697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:53.092 [2024-11-04 10:06:25.136717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:53.092 [2024-11-04 10:06:25.136736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:53.092 [2024-11-04 10:06:25.136754] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:53.092 [2024-11-04 10:06:25.136765] bdev_nvme.c:2319:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
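The repeating bdev_get_bdevs | jq | sort | xargs calls in this stretch are the test's get_bdev_list helper, re-run once per second by wait_for_bdev until the bdev list matches what the test expects. A simplified sketch of that polling pattern (not the exact helper from discovery_remove_ifc.sh; the 30-iteration cap is an assumption), using the same RPC socket:

#!/usr/bin/env bash
# Simplified wait_for_bdev-style poll over the host app's RPC socket.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
expected="$1"    # e.g. "nvme0n1", or "" to wait for the bdev list to drain

current=""
for _ in $(seq 1 30); do
    # Same pipeline as the trace: bdev names only, sorted, joined onto one line.
    current=$($RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
    [[ "$current" == "$expected" ]] && exit 0
    sleep 1
done
echo "timed out waiting for bdev list '$expected' (last saw '$current')" >&2
exit 1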
00:16:53.092 [2024-11-04 10:06:25.136794] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:16:53.092 [2024-11-04 10:06:25.136813] bdev_nvme.c:2126:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:53.092 [2024-11-04 10:06:25.136824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.092 10:06:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.030 [2024-11-04 10:06:26.136877] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:54.030 [2024-11-04 10:06:26.136930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:54.030 [2024-11-04 10:06:26.136960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:54.030 [2024-11-04 10:06:26.136971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:54.030 [2024-11-04 10:06:26.136981] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:54.030 [2024-11-04 10:06:26.136992] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:54.030 [2024-11-04 10:06:26.136999] bdev_nvme.c:2319:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:54.030 [2024-11-04 10:06:26.137018] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
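The reconnect failures logged above (uring connect() errno 110, controller reinitialization failed, pending resets cleared) are the expected fallout of the earlier sh@75/sh@76 step that removed 10.0.0.3 and downed nvmf_tgt_if inside the target's network namespace. A sketch of that fault-injection step as it appears in the trace, assuming the nvmf_tgt_ns_spdk namespace and nvmf_tgt_if veth created by nvmftestinit:

# Drop the target-side address and link so the host's TCP qpairs time out (errno 110).
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# With --ctrlr-loss-timeout-sec 2 on the discovery attach, bdev_nvme stops retrying
# after roughly two seconds and nvme0n1 drops out of bdev_get_bdevs.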
00:16:54.030 [2024-11-04 10:06:26.137048] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:54.030 [2024-11-04 10:06:26.137112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.030 [2024-11-04 10:06:26.137127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.030 [2024-11-04 10:06:26.137140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.030 [2024-11-04 10:06:26.137149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.030 [2024-11-04 10:06:26.137159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.030 [2024-11-04 10:06:26.137168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.030 [2024-11-04 10:06:26.137177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.030 [2024-11-04 10:06:26.137186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.030 [2024-11-04 10:06:26.137212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.030 [2024-11-04 10:06:26.137221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.030 [2024-11-04 10:06:26.137230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
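At this point both the data controller (cnode0) and the discovery controller are in a failed state and the discovery entry has been removed. The test itself only watches the bdev list, but controller state could also be inspected directly; the RPC below is an aside that is not part of this script, so treat it as an assumption about available tooling rather than the test flow:

# Hypothetical inspection step (not in discovery_remove_ifc.sh): list NVMe bdev
# controllers and their state on the host app while the interface is down.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .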
00:16:54.030 [2024-11-04 10:06:26.137540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52ca20 (9): Bad file descriptor 00:16:54.030 [2024-11-04 10:06:26.138553] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:54.030 [2024-11-04 10:06:26.138610] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:54.030 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.030 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.030 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.030 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.030 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.030 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.030 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.030 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:54.290 10:06:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.226 10:06:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:55.226 10:06:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.160 [2024-11-04 10:06:28.148543] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:56.160 [2024-11-04 10:06:28.148575] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:56.160 [2024-11-04 10:06:28.148636] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:56.160 [2024-11-04 10:06:28.154581] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:56.160 [2024-11-04 10:06:28.209039] bdev_nvme.c:5633:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:56.160 [2024-11-04 10:06:28.209834] bdev_nvme.c:1977:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x57d9f0:1 started. 00:16:56.160 [2024-11-04 10:06:28.211384] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:56.160 [2024-11-04 10:06:28.211443] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:56.160 [2024-11-04 10:06:28.211467] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:56.160 [2024-11-04 10:06:28.211483] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:56.160 [2024-11-04 10:06:28.211491] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:56.160 [2024-11-04 10:06:28.217210] bdev_nvme.c:1783:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x57d9f0 was disconnected and freed. delete nvme_qpair. 
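Re-adding the address and bringing nvmf_tgt_if back up (the sh@82/sh@83 steps earlier) lets the discovery poller find the subsystem again, and the trace above shows a fresh controller (cnode0, 2) being created and exposed as nvme1n1. A sketch of that restore-and-wait sequence, reusing the polling idea sketched earlier and the same names from this run:

# Restore the target interface inside its namespace...
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

# ...then poll until the re-attached namespace appears as a new bdev (nvme1n1 here).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | grep -qx nvme1n1; do
    sleep 1
done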
00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77089 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77089 ']' 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77089 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77089 00:16:56.419 killing process with pid 77089 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77089' 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77089 00:16:56.419 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77089 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:56.705 rmmod nvme_tcp 00:16:56.705 rmmod nvme_fabrics 00:16:56.705 rmmod nvme_keyring 00:16:56.705 10:06:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77064 ']' 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77064 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77064 ']' 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77064 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77064 00:16:56.705 killing process with pid 77064 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77064' 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77064 00:16:56.705 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77064 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:56.964 10:06:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:56.964 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:57.223 00:16:57.223 real 0m13.196s 00:16:57.223 user 0m22.359s 00:16:57.223 sys 0m2.428s 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.223 ************************************ 00:16:57.223 END TEST nvmf_discovery_remove_ifc 00:16:57.223 ************************************ 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:57.223 10:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:57.224 10:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.224 ************************************ 00:16:57.224 START TEST nvmf_identify_kernel_target 00:16:57.224 ************************************ 00:16:57.224 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:57.224 * Looking for test storage... 
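Before the nvmf_identify_kernel_target run that starts here, the previous test finished with the standard cleanup visible a few entries up: both SPDK processes (77089 and 77064) were killed, nvme-tcp/nvme-fabrics/nvme-keyring were unloaded, SPDK_NVMF iptables rules were filtered out of a save/restore cycle, and the veth/bridge/namespace topology was deleted. A condensed sketch of that teardown, assuming the interface and namespace names used throughout this run:

# Condensed teardown mirroring nvmftestfini/nvmf_veth_fini from the trace.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Drop only the SPDK_NVMF-tagged rules, keep everything else.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the virtual topology built for the test.
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk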
00:16:57.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:57.224 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:57.224 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:57.224 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:57.483 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.484 --rc genhtml_branch_coverage=1 00:16:57.484 --rc genhtml_function_coverage=1 00:16:57.484 --rc genhtml_legend=1 00:16:57.484 --rc geninfo_all_blocks=1 00:16:57.484 --rc geninfo_unexecuted_blocks=1 00:16:57.484 00:16:57.484 ' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.484 --rc genhtml_branch_coverage=1 00:16:57.484 --rc genhtml_function_coverage=1 00:16:57.484 --rc genhtml_legend=1 00:16:57.484 --rc geninfo_all_blocks=1 00:16:57.484 --rc geninfo_unexecuted_blocks=1 00:16:57.484 00:16:57.484 ' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.484 --rc genhtml_branch_coverage=1 00:16:57.484 --rc genhtml_function_coverage=1 00:16:57.484 --rc genhtml_legend=1 00:16:57.484 --rc geninfo_all_blocks=1 00:16:57.484 --rc geninfo_unexecuted_blocks=1 00:16:57.484 00:16:57.484 ' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.484 --rc genhtml_branch_coverage=1 00:16:57.484 --rc genhtml_function_coverage=1 00:16:57.484 --rc genhtml_legend=1 00:16:57.484 --rc geninfo_all_blocks=1 00:16:57.484 --rc geninfo_unexecuted_blocks=1 00:16:57.484 00:16:57.484 ' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
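The scripts/common.sh trace just above is the lcov version gate: it splits the installed lcov version and the threshold (1.15 vs 2) on dots and dashes, compares field by field, and on this run picks the --rc lcov_branch_coverage/lcov_function_coverage options because the installed lcov is older than 2. A simplified bash sketch of that dotted-version comparison, offered as an illustration rather than the exact cmp_versions implementation:

#!/usr/bin/env bash
# Hedged sketch: succeed if version $1 sorts before version $2 (e.g. "1.15" vs "2").
version_lt() {
    local IFS=.- v1 v2 i
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # versions are equal
}

version_lt 1.15 2 && echo "installed lcov predates 2.x"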
00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:57.484 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:57.484 10:06:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:57.484 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:57.485 10:06:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:57.485 Cannot find device "nvmf_init_br" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:57.485 Cannot find device "nvmf_init_br2" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:57.485 Cannot find device "nvmf_tgt_br" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:57.485 Cannot find device "nvmf_tgt_br2" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:57.485 Cannot find device "nvmf_init_br" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:57.485 Cannot find device "nvmf_init_br2" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:57.485 Cannot find device "nvmf_tgt_br" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:57.485 Cannot find device "nvmf_tgt_br2" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:57.485 Cannot find device "nvmf_br" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:57.485 Cannot find device "nvmf_init_if" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:57.485 Cannot find device "nvmf_init_if2" 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.485 10:06:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:57.485 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:57.744 10:06:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:57.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:57.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:57.744 00:16:57.744 --- 10.0.0.3 ping statistics --- 00:16:57.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.744 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:57.744 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:57.744 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:16:57.744 00:16:57.744 --- 10.0.0.4 ping statistics --- 00:16:57.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.744 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:57.744 00:16:57.744 --- 10.0.0.1 ping statistics --- 00:16:57.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.744 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:57.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
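The nvmf_veth_init trace above builds the test network: veth pairs for the initiator side (nvmf_init_if/nvmf_init_br, nvmf_init_if2/nvmf_init_br2) and the target side (nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2), with the target ends moved into the nvmf_tgt_ns_spdk namespace, everything enslaved to the nvmf_br bridge, and TCP port 4420 opened in iptables before the ping checks. A minimal standalone sketch of the same topology, reduced to one pair per side and using only names and addresses that appear in the trace (run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                             # host reaches the namespaced target address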
00:16:57.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:57.744 00:16:57.744 --- 10.0.0.2 ping statistics --- 00:16:57.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.744 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:57.744 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:57.745 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:57.745 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:57.745 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:57.745 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:57.745 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:57.745 10:06:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:58.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:58.263 Waiting for block devices as requested 00:16:58.263 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.263 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:58.263 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:58.521 No valid GPT data, bailing 00:16:58.521 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:58.521 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:58.522 10:06:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:58.522 No valid GPT data, bailing 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:58.522 No valid GPT data, bailing 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:58.522 No valid GPT data, bailing 00:16:58.522 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:58.781 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid=89901d6b-8f02-4106-8c0e-f8e118ca6735 -a 10.0.0.1 -t tcp -s 4420 00:16:58.781 00:16:58.781 Discovery Log Number of Records 2, Generation counter 2 00:16:58.781 =====Discovery Log Entry 0====== 00:16:58.781 trtype: tcp 00:16:58.781 adrfam: ipv4 00:16:58.781 subtype: current discovery subsystem 00:16:58.781 treq: not specified, sq flow control disable supported 00:16:58.781 portid: 1 00:16:58.781 trsvcid: 4420 00:16:58.781 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:58.781 traddr: 10.0.0.1 00:16:58.781 eflags: none 00:16:58.781 sectype: none 00:16:58.781 =====Discovery Log Entry 1====== 00:16:58.781 trtype: tcp 00:16:58.781 adrfam: ipv4 00:16:58.781 subtype: nvme subsystem 00:16:58.781 treq: not 
specified, sq flow control disable supported 00:16:58.781 portid: 1 00:16:58.781 trsvcid: 4420 00:16:58.781 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:58.781 traddr: 10.0.0.1 00:16:58.781 eflags: none 00:16:58.781 sectype: none 00:16:58.782 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:58.782 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:58.782 ===================================================== 00:16:58.782 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:58.782 ===================================================== 00:16:58.782 Controller Capabilities/Features 00:16:58.782 ================================ 00:16:58.782 Vendor ID: 0000 00:16:58.782 Subsystem Vendor ID: 0000 00:16:58.782 Serial Number: 669ef5e11402e1c56a65 00:16:58.782 Model Number: Linux 00:16:58.782 Firmware Version: 6.8.9-20 00:16:58.782 Recommended Arb Burst: 0 00:16:58.782 IEEE OUI Identifier: 00 00 00 00:16:58.782 Multi-path I/O 00:16:58.782 May have multiple subsystem ports: No 00:16:58.782 May have multiple controllers: No 00:16:58.782 Associated with SR-IOV VF: No 00:16:58.782 Max Data Transfer Size: Unlimited 00:16:58.782 Max Number of Namespaces: 0 00:16:58.782 Max Number of I/O Queues: 1024 00:16:58.782 NVMe Specification Version (VS): 1.3 00:16:58.782 NVMe Specification Version (Identify): 1.3 00:16:58.782 Maximum Queue Entries: 1024 00:16:58.782 Contiguous Queues Required: No 00:16:58.782 Arbitration Mechanisms Supported 00:16:58.782 Weighted Round Robin: Not Supported 00:16:58.782 Vendor Specific: Not Supported 00:16:58.782 Reset Timeout: 7500 ms 00:16:58.782 Doorbell Stride: 4 bytes 00:16:58.782 NVM Subsystem Reset: Not Supported 00:16:58.782 Command Sets Supported 00:16:58.782 NVM Command Set: Supported 00:16:58.782 Boot Partition: Not Supported 00:16:58.782 Memory Page Size Minimum: 4096 bytes 00:16:58.782 Memory Page Size Maximum: 4096 bytes 00:16:58.782 Persistent Memory Region: Not Supported 00:16:58.782 Optional Asynchronous Events Supported 00:16:58.782 Namespace Attribute Notices: Not Supported 00:16:58.782 Firmware Activation Notices: Not Supported 00:16:58.782 ANA Change Notices: Not Supported 00:16:58.782 PLE Aggregate Log Change Notices: Not Supported 00:16:58.782 LBA Status Info Alert Notices: Not Supported 00:16:58.782 EGE Aggregate Log Change Notices: Not Supported 00:16:58.782 Normal NVM Subsystem Shutdown event: Not Supported 00:16:58.782 Zone Descriptor Change Notices: Not Supported 00:16:58.782 Discovery Log Change Notices: Supported 00:16:58.782 Controller Attributes 00:16:58.782 128-bit Host Identifier: Not Supported 00:16:58.782 Non-Operational Permissive Mode: Not Supported 00:16:58.782 NVM Sets: Not Supported 00:16:58.782 Read Recovery Levels: Not Supported 00:16:58.782 Endurance Groups: Not Supported 00:16:58.782 Predictable Latency Mode: Not Supported 00:16:58.782 Traffic Based Keep ALive: Not Supported 00:16:58.782 Namespace Granularity: Not Supported 00:16:58.782 SQ Associations: Not Supported 00:16:58.782 UUID List: Not Supported 00:16:58.782 Multi-Domain Subsystem: Not Supported 00:16:58.782 Fixed Capacity Management: Not Supported 00:16:58.782 Variable Capacity Management: Not Supported 00:16:58.782 Delete Endurance Group: Not Supported 00:16:58.782 Delete NVM Set: Not Supported 00:16:58.782 Extended LBA Formats Supported: Not Supported 00:16:58.782 Flexible Data 
Placement Supported: Not Supported 00:16:58.782 00:16:58.782 Controller Memory Buffer Support 00:16:58.782 ================================ 00:16:58.782 Supported: No 00:16:58.782 00:16:58.782 Persistent Memory Region Support 00:16:58.782 ================================ 00:16:58.782 Supported: No 00:16:58.782 00:16:58.782 Admin Command Set Attributes 00:16:58.782 ============================ 00:16:58.782 Security Send/Receive: Not Supported 00:16:58.782 Format NVM: Not Supported 00:16:58.782 Firmware Activate/Download: Not Supported 00:16:58.782 Namespace Management: Not Supported 00:16:58.782 Device Self-Test: Not Supported 00:16:58.782 Directives: Not Supported 00:16:58.782 NVMe-MI: Not Supported 00:16:58.782 Virtualization Management: Not Supported 00:16:58.782 Doorbell Buffer Config: Not Supported 00:16:58.782 Get LBA Status Capability: Not Supported 00:16:58.782 Command & Feature Lockdown Capability: Not Supported 00:16:58.782 Abort Command Limit: 1 00:16:58.782 Async Event Request Limit: 1 00:16:58.782 Number of Firmware Slots: N/A 00:16:58.782 Firmware Slot 1 Read-Only: N/A 00:16:58.782 Firmware Activation Without Reset: N/A 00:16:58.782 Multiple Update Detection Support: N/A 00:16:58.782 Firmware Update Granularity: No Information Provided 00:16:58.782 Per-Namespace SMART Log: No 00:16:58.782 Asymmetric Namespace Access Log Page: Not Supported 00:16:58.782 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:58.782 Command Effects Log Page: Not Supported 00:16:58.782 Get Log Page Extended Data: Supported 00:16:58.782 Telemetry Log Pages: Not Supported 00:16:58.782 Persistent Event Log Pages: Not Supported 00:16:58.782 Supported Log Pages Log Page: May Support 00:16:58.782 Commands Supported & Effects Log Page: Not Supported 00:16:58.782 Feature Identifiers & Effects Log Page:May Support 00:16:58.782 NVMe-MI Commands & Effects Log Page: May Support 00:16:58.782 Data Area 4 for Telemetry Log: Not Supported 00:16:58.782 Error Log Page Entries Supported: 1 00:16:58.782 Keep Alive: Not Supported 00:16:58.782 00:16:58.782 NVM Command Set Attributes 00:16:58.782 ========================== 00:16:58.782 Submission Queue Entry Size 00:16:58.782 Max: 1 00:16:58.782 Min: 1 00:16:58.782 Completion Queue Entry Size 00:16:58.782 Max: 1 00:16:58.782 Min: 1 00:16:58.782 Number of Namespaces: 0 00:16:58.782 Compare Command: Not Supported 00:16:58.782 Write Uncorrectable Command: Not Supported 00:16:58.782 Dataset Management Command: Not Supported 00:16:58.782 Write Zeroes Command: Not Supported 00:16:58.782 Set Features Save Field: Not Supported 00:16:58.782 Reservations: Not Supported 00:16:58.782 Timestamp: Not Supported 00:16:58.782 Copy: Not Supported 00:16:58.782 Volatile Write Cache: Not Present 00:16:58.782 Atomic Write Unit (Normal): 1 00:16:58.782 Atomic Write Unit (PFail): 1 00:16:58.782 Atomic Compare & Write Unit: 1 00:16:58.782 Fused Compare & Write: Not Supported 00:16:58.782 Scatter-Gather List 00:16:58.782 SGL Command Set: Supported 00:16:58.782 SGL Keyed: Not Supported 00:16:58.782 SGL Bit Bucket Descriptor: Not Supported 00:16:58.782 SGL Metadata Pointer: Not Supported 00:16:58.782 Oversized SGL: Not Supported 00:16:58.782 SGL Metadata Address: Not Supported 00:16:58.782 SGL Offset: Supported 00:16:58.782 Transport SGL Data Block: Not Supported 00:16:58.782 Replay Protected Memory Block: Not Supported 00:16:58.782 00:16:58.782 Firmware Slot Information 00:16:58.782 ========================= 00:16:58.782 Active slot: 0 00:16:58.782 00:16:58.782 00:16:58.782 Error Log 
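The discovery and identify output in this test comes from a kernel (nvmet) soft target that configure_kernel_target set up through configfs just before the nvme discover call: subsystem nqn.2016-06.io.spdk:testnqn backed by /dev/nvme1n1, exported over TCP at 10.0.0.1:4420. The xtrace does not show the redirection targets of the echo commands, so the attribute file names below are the standard nvmet configfs ones (attr_allow_any_host, device_path, enable, addr_*), not copied from the log, and loading nvmet_tcp explicitly is likewise an assumption; this is a condensed sketch, not the exact script:

  modprobe nvmet nvmet_tcp
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host    # let any host NQN connect
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  echo /dev/nvme1n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  mkdir ports/1
  echo tcp      > ports/1/addr_trtype
  echo ipv4     > ports/1/addr_adrfam
  echo 10.0.0.1 > ports/1/addr_traddr
  echo 4420     > ports/1/addr_trsvcid
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420      # should list the discovery subsystem and testnqn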
00:16:58.782 ========= 00:16:58.782 00:16:58.782 Active Namespaces 00:16:58.782 ================= 00:16:58.782 Discovery Log Page 00:16:58.782 ================== 00:16:58.782 Generation Counter: 2 00:16:58.782 Number of Records: 2 00:16:58.782 Record Format: 0 00:16:58.782 00:16:58.782 Discovery Log Entry 0 00:16:58.782 ---------------------- 00:16:58.782 Transport Type: 3 (TCP) 00:16:58.782 Address Family: 1 (IPv4) 00:16:58.782 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:58.782 Entry Flags: 00:16:58.782 Duplicate Returned Information: 0 00:16:58.782 Explicit Persistent Connection Support for Discovery: 0 00:16:58.782 Transport Requirements: 00:16:58.782 Secure Channel: Not Specified 00:16:58.782 Port ID: 1 (0x0001) 00:16:58.782 Controller ID: 65535 (0xffff) 00:16:58.782 Admin Max SQ Size: 32 00:16:58.782 Transport Service Identifier: 4420 00:16:58.782 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:58.782 Transport Address: 10.0.0.1 00:16:58.782 Discovery Log Entry 1 00:16:58.782 ---------------------- 00:16:58.782 Transport Type: 3 (TCP) 00:16:58.782 Address Family: 1 (IPv4) 00:16:58.782 Subsystem Type: 2 (NVM Subsystem) 00:16:58.782 Entry Flags: 00:16:58.782 Duplicate Returned Information: 0 00:16:58.783 Explicit Persistent Connection Support for Discovery: 0 00:16:58.783 Transport Requirements: 00:16:58.783 Secure Channel: Not Specified 00:16:58.783 Port ID: 1 (0x0001) 00:16:58.783 Controller ID: 65535 (0xffff) 00:16:58.783 Admin Max SQ Size: 32 00:16:58.783 Transport Service Identifier: 4420 00:16:58.783 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:58.783 Transport Address: 10.0.0.1 00:16:58.783 10:06:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:59.042 get_feature(0x01) failed 00:16:59.042 get_feature(0x02) failed 00:16:59.042 get_feature(0x04) failed 00:16:59.042 ===================================================== 00:16:59.042 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:59.042 ===================================================== 00:16:59.042 Controller Capabilities/Features 00:16:59.042 ================================ 00:16:59.042 Vendor ID: 0000 00:16:59.042 Subsystem Vendor ID: 0000 00:16:59.042 Serial Number: 4173db9501af1a545677 00:16:59.042 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:59.042 Firmware Version: 6.8.9-20 00:16:59.042 Recommended Arb Burst: 6 00:16:59.042 IEEE OUI Identifier: 00 00 00 00:16:59.042 Multi-path I/O 00:16:59.042 May have multiple subsystem ports: Yes 00:16:59.042 May have multiple controllers: Yes 00:16:59.042 Associated with SR-IOV VF: No 00:16:59.042 Max Data Transfer Size: Unlimited 00:16:59.042 Max Number of Namespaces: 1024 00:16:59.042 Max Number of I/O Queues: 128 00:16:59.042 NVMe Specification Version (VS): 1.3 00:16:59.042 NVMe Specification Version (Identify): 1.3 00:16:59.042 Maximum Queue Entries: 1024 00:16:59.042 Contiguous Queues Required: No 00:16:59.042 Arbitration Mechanisms Supported 00:16:59.042 Weighted Round Robin: Not Supported 00:16:59.042 Vendor Specific: Not Supported 00:16:59.042 Reset Timeout: 7500 ms 00:16:59.042 Doorbell Stride: 4 bytes 00:16:59.042 NVM Subsystem Reset: Not Supported 00:16:59.042 Command Sets Supported 00:16:59.042 NVM Command Set: Supported 00:16:59.042 Boot Partition: Not Supported 00:16:59.043 Memory 
Page Size Minimum: 4096 bytes 00:16:59.043 Memory Page Size Maximum: 4096 bytes 00:16:59.043 Persistent Memory Region: Not Supported 00:16:59.043 Optional Asynchronous Events Supported 00:16:59.043 Namespace Attribute Notices: Supported 00:16:59.043 Firmware Activation Notices: Not Supported 00:16:59.043 ANA Change Notices: Supported 00:16:59.043 PLE Aggregate Log Change Notices: Not Supported 00:16:59.043 LBA Status Info Alert Notices: Not Supported 00:16:59.043 EGE Aggregate Log Change Notices: Not Supported 00:16:59.043 Normal NVM Subsystem Shutdown event: Not Supported 00:16:59.043 Zone Descriptor Change Notices: Not Supported 00:16:59.043 Discovery Log Change Notices: Not Supported 00:16:59.043 Controller Attributes 00:16:59.043 128-bit Host Identifier: Supported 00:16:59.043 Non-Operational Permissive Mode: Not Supported 00:16:59.043 NVM Sets: Not Supported 00:16:59.043 Read Recovery Levels: Not Supported 00:16:59.043 Endurance Groups: Not Supported 00:16:59.043 Predictable Latency Mode: Not Supported 00:16:59.043 Traffic Based Keep ALive: Supported 00:16:59.043 Namespace Granularity: Not Supported 00:16:59.043 SQ Associations: Not Supported 00:16:59.043 UUID List: Not Supported 00:16:59.043 Multi-Domain Subsystem: Not Supported 00:16:59.043 Fixed Capacity Management: Not Supported 00:16:59.043 Variable Capacity Management: Not Supported 00:16:59.043 Delete Endurance Group: Not Supported 00:16:59.043 Delete NVM Set: Not Supported 00:16:59.043 Extended LBA Formats Supported: Not Supported 00:16:59.043 Flexible Data Placement Supported: Not Supported 00:16:59.043 00:16:59.043 Controller Memory Buffer Support 00:16:59.043 ================================ 00:16:59.043 Supported: No 00:16:59.043 00:16:59.043 Persistent Memory Region Support 00:16:59.043 ================================ 00:16:59.043 Supported: No 00:16:59.043 00:16:59.043 Admin Command Set Attributes 00:16:59.043 ============================ 00:16:59.043 Security Send/Receive: Not Supported 00:16:59.043 Format NVM: Not Supported 00:16:59.043 Firmware Activate/Download: Not Supported 00:16:59.043 Namespace Management: Not Supported 00:16:59.043 Device Self-Test: Not Supported 00:16:59.043 Directives: Not Supported 00:16:59.043 NVMe-MI: Not Supported 00:16:59.043 Virtualization Management: Not Supported 00:16:59.043 Doorbell Buffer Config: Not Supported 00:16:59.043 Get LBA Status Capability: Not Supported 00:16:59.043 Command & Feature Lockdown Capability: Not Supported 00:16:59.043 Abort Command Limit: 4 00:16:59.043 Async Event Request Limit: 4 00:16:59.043 Number of Firmware Slots: N/A 00:16:59.043 Firmware Slot 1 Read-Only: N/A 00:16:59.043 Firmware Activation Without Reset: N/A 00:16:59.043 Multiple Update Detection Support: N/A 00:16:59.043 Firmware Update Granularity: No Information Provided 00:16:59.043 Per-Namespace SMART Log: Yes 00:16:59.043 Asymmetric Namespace Access Log Page: Supported 00:16:59.043 ANA Transition Time : 10 sec 00:16:59.043 00:16:59.043 Asymmetric Namespace Access Capabilities 00:16:59.043 ANA Optimized State : Supported 00:16:59.043 ANA Non-Optimized State : Supported 00:16:59.043 ANA Inaccessible State : Supported 00:16:59.043 ANA Persistent Loss State : Supported 00:16:59.043 ANA Change State : Supported 00:16:59.043 ANAGRPID is not changed : No 00:16:59.043 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:59.043 00:16:59.043 ANA Group Identifier Maximum : 128 00:16:59.043 Number of ANA Group Identifiers : 128 00:16:59.043 Max Number of Allowed Namespaces : 1024 00:16:59.043 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:59.043 Command Effects Log Page: Supported 00:16:59.043 Get Log Page Extended Data: Supported 00:16:59.043 Telemetry Log Pages: Not Supported 00:16:59.043 Persistent Event Log Pages: Not Supported 00:16:59.043 Supported Log Pages Log Page: May Support 00:16:59.043 Commands Supported & Effects Log Page: Not Supported 00:16:59.043 Feature Identifiers & Effects Log Page:May Support 00:16:59.043 NVMe-MI Commands & Effects Log Page: May Support 00:16:59.043 Data Area 4 for Telemetry Log: Not Supported 00:16:59.043 Error Log Page Entries Supported: 128 00:16:59.043 Keep Alive: Supported 00:16:59.043 Keep Alive Granularity: 1000 ms 00:16:59.043 00:16:59.043 NVM Command Set Attributes 00:16:59.043 ========================== 00:16:59.043 Submission Queue Entry Size 00:16:59.043 Max: 64 00:16:59.043 Min: 64 00:16:59.043 Completion Queue Entry Size 00:16:59.043 Max: 16 00:16:59.043 Min: 16 00:16:59.043 Number of Namespaces: 1024 00:16:59.043 Compare Command: Not Supported 00:16:59.043 Write Uncorrectable Command: Not Supported 00:16:59.043 Dataset Management Command: Supported 00:16:59.043 Write Zeroes Command: Supported 00:16:59.043 Set Features Save Field: Not Supported 00:16:59.043 Reservations: Not Supported 00:16:59.043 Timestamp: Not Supported 00:16:59.043 Copy: Not Supported 00:16:59.043 Volatile Write Cache: Present 00:16:59.043 Atomic Write Unit (Normal): 1 00:16:59.043 Atomic Write Unit (PFail): 1 00:16:59.043 Atomic Compare & Write Unit: 1 00:16:59.043 Fused Compare & Write: Not Supported 00:16:59.043 Scatter-Gather List 00:16:59.043 SGL Command Set: Supported 00:16:59.043 SGL Keyed: Not Supported 00:16:59.043 SGL Bit Bucket Descriptor: Not Supported 00:16:59.043 SGL Metadata Pointer: Not Supported 00:16:59.043 Oversized SGL: Not Supported 00:16:59.043 SGL Metadata Address: Not Supported 00:16:59.043 SGL Offset: Supported 00:16:59.043 Transport SGL Data Block: Not Supported 00:16:59.043 Replay Protected Memory Block: Not Supported 00:16:59.043 00:16:59.043 Firmware Slot Information 00:16:59.043 ========================= 00:16:59.043 Active slot: 0 00:16:59.043 00:16:59.043 Asymmetric Namespace Access 00:16:59.043 =========================== 00:16:59.043 Change Count : 0 00:16:59.043 Number of ANA Group Descriptors : 1 00:16:59.043 ANA Group Descriptor : 0 00:16:59.043 ANA Group ID : 1 00:16:59.043 Number of NSID Values : 1 00:16:59.043 Change Count : 0 00:16:59.043 ANA State : 1 00:16:59.043 Namespace Identifier : 1 00:16:59.043 00:16:59.043 Commands Supported and Effects 00:16:59.043 ============================== 00:16:59.043 Admin Commands 00:16:59.043 -------------- 00:16:59.043 Get Log Page (02h): Supported 00:16:59.043 Identify (06h): Supported 00:16:59.043 Abort (08h): Supported 00:16:59.043 Set Features (09h): Supported 00:16:59.043 Get Features (0Ah): Supported 00:16:59.043 Asynchronous Event Request (0Ch): Supported 00:16:59.043 Keep Alive (18h): Supported 00:16:59.043 I/O Commands 00:16:59.043 ------------ 00:16:59.043 Flush (00h): Supported 00:16:59.043 Write (01h): Supported LBA-Change 00:16:59.043 Read (02h): Supported 00:16:59.043 Write Zeroes (08h): Supported LBA-Change 00:16:59.043 Dataset Management (09h): Supported 00:16:59.043 00:16:59.043 Error Log 00:16:59.043 ========= 00:16:59.043 Entry: 0 00:16:59.043 Error Count: 0x3 00:16:59.043 Submission Queue Id: 0x0 00:16:59.043 Command Id: 0x5 00:16:59.043 Phase Bit: 0 00:16:59.043 Status Code: 0x2 00:16:59.043 Status Code Type: 0x0 00:16:59.043 Do Not Retry: 1 00:16:59.043 Error 
Location: 0x28 00:16:59.043 LBA: 0x0 00:16:59.043 Namespace: 0x0 00:16:59.043 Vendor Log Page: 0x0 00:16:59.043 ----------- 00:16:59.043 Entry: 1 00:16:59.043 Error Count: 0x2 00:16:59.043 Submission Queue Id: 0x0 00:16:59.043 Command Id: 0x5 00:16:59.043 Phase Bit: 0 00:16:59.043 Status Code: 0x2 00:16:59.043 Status Code Type: 0x0 00:16:59.043 Do Not Retry: 1 00:16:59.043 Error Location: 0x28 00:16:59.043 LBA: 0x0 00:16:59.043 Namespace: 0x0 00:16:59.043 Vendor Log Page: 0x0 00:16:59.043 ----------- 00:16:59.043 Entry: 2 00:16:59.043 Error Count: 0x1 00:16:59.043 Submission Queue Id: 0x0 00:16:59.043 Command Id: 0x4 00:16:59.043 Phase Bit: 0 00:16:59.043 Status Code: 0x2 00:16:59.043 Status Code Type: 0x0 00:16:59.043 Do Not Retry: 1 00:16:59.043 Error Location: 0x28 00:16:59.043 LBA: 0x0 00:16:59.043 Namespace: 0x0 00:16:59.043 Vendor Log Page: 0x0 00:16:59.043 00:16:59.043 Number of Queues 00:16:59.043 ================ 00:16:59.043 Number of I/O Submission Queues: 128 00:16:59.043 Number of I/O Completion Queues: 128 00:16:59.043 00:16:59.043 ZNS Specific Controller Data 00:16:59.043 ============================ 00:16:59.043 Zone Append Size Limit: 0 00:16:59.043 00:16:59.043 00:16:59.043 Active Namespaces 00:16:59.043 ================= 00:16:59.043 get_feature(0x05) failed 00:16:59.043 Namespace ID:1 00:16:59.043 Command Set Identifier: NVM (00h) 00:16:59.043 Deallocate: Supported 00:16:59.044 Deallocated/Unwritten Error: Not Supported 00:16:59.044 Deallocated Read Value: Unknown 00:16:59.044 Deallocate in Write Zeroes: Not Supported 00:16:59.044 Deallocated Guard Field: 0xFFFF 00:16:59.044 Flush: Supported 00:16:59.044 Reservation: Not Supported 00:16:59.044 Namespace Sharing Capabilities: Multiple Controllers 00:16:59.044 Size (in LBAs): 1310720 (5GiB) 00:16:59.044 Capacity (in LBAs): 1310720 (5GiB) 00:16:59.044 Utilization (in LBAs): 1310720 (5GiB) 00:16:59.044 UUID: fd861dc6-463a-49e3-b824-20ed09440fd9 00:16:59.044 Thin Provisioning: Not Supported 00:16:59.044 Per-NS Atomic Units: Yes 00:16:59.044 Atomic Boundary Size (Normal): 0 00:16:59.044 Atomic Boundary Size (PFail): 0 00:16:59.044 Atomic Boundary Offset: 0 00:16:59.044 NGUID/EUI64 Never Reused: No 00:16:59.044 ANA group ID: 1 00:16:59.044 Namespace Write Protected: No 00:16:59.044 Number of LBA Formats: 1 00:16:59.044 Current LBA Format: LBA Format #00 00:16:59.044 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:59.044 00:16:59.044 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:59.044 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:59.044 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:59.044 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.044 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:59.044 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.044 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.044 rmmod nvme_tcp 00:16:59.044 rmmod nvme_fabrics 00:16:59.044 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:59.302 10:06:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:59.302 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:16:59.562 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:59.562 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:59.562 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:59.562 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:59.562 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:59.562 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:59.562 10:06:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:00.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:00.129 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:00.388 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:00.388 ************************************ 00:17:00.388 END TEST nvmf_identify_kernel_target 00:17:00.388 ************************************ 00:17:00.388 00:17:00.388 real 0m3.180s 00:17:00.388 user 0m1.153s 00:17:00.388 sys 0m1.436s 00:17:00.388 10:06:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:00.388 10:06:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.388 10:06:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:00.388 10:06:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:00.388 10:06:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:00.388 10:06:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.388 ************************************ 00:17:00.388 START TEST nvmf_auth_host 00:17:00.388 ************************************ 00:17:00.388 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:00.388 * Looking for test storage... 
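The nvmftestfini and clean_kernel_target traces above undo the setup in reverse: the firewall rules are dropped by reloading a ruleset with every SPDK_NVMF-tagged rule filtered out (which is why they were inserted with -m comment), the veth/bridge/namespace topology is deleted, and the nvmet configfs tree is removed before the target modules are unloaded. A condensed sketch of that teardown; the echo-0 target is inferred from the standard nvmet layout and the namespace deletion happens inside _remove_spdk_ns, neither of which is visible verbatim in the trace:

  iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the rules this test added
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet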
00:17:00.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.648 --rc genhtml_branch_coverage=1 00:17:00.648 --rc genhtml_function_coverage=1 00:17:00.648 --rc genhtml_legend=1 00:17:00.648 --rc geninfo_all_blocks=1 00:17:00.648 --rc geninfo_unexecuted_blocks=1 00:17:00.648 00:17:00.648 ' 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.648 --rc genhtml_branch_coverage=1 00:17:00.648 --rc genhtml_function_coverage=1 00:17:00.648 --rc genhtml_legend=1 00:17:00.648 --rc geninfo_all_blocks=1 00:17:00.648 --rc geninfo_unexecuted_blocks=1 00:17:00.648 00:17:00.648 ' 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.648 --rc genhtml_branch_coverage=1 00:17:00.648 --rc genhtml_function_coverage=1 00:17:00.648 --rc genhtml_legend=1 00:17:00.648 --rc geninfo_all_blocks=1 00:17:00.648 --rc geninfo_unexecuted_blocks=1 00:17:00.648 00:17:00.648 ' 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:00.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.648 --rc genhtml_branch_coverage=1 00:17:00.648 --rc genhtml_function_coverage=1 00:17:00.648 --rc genhtml_legend=1 00:17:00.648 --rc geninfo_all_blocks=1 00:17:00.648 --rc geninfo_unexecuted_blocks=1 00:17:00.648 00:17:00.648 ' 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.648 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.649 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:00.649 Cannot find device "nvmf_init_br" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:00.649 Cannot find device "nvmf_init_br2" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:00.649 Cannot find device "nvmf_tgt_br" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:00.649 Cannot find device "nvmf_tgt_br2" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:00.649 Cannot find device "nvmf_init_br" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:00.649 Cannot find device "nvmf_init_br2" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:00.649 Cannot find device "nvmf_tgt_br" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:00.649 Cannot find device "nvmf_tgt_br2" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:00.649 Cannot find device "nvmf_br" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:00.649 Cannot find device "nvmf_init_if" 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:00.649 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:00.909 Cannot find device "nvmf_init_if2" 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:00.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.909 10:06:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:00.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:00.909 10:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:00.909 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:00.909 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
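Condensed for reference, the veth/bridge test network that nvmf_veth_init is assembling in the trace above (the remaining bridge memberships, iptables ACCEPT rules and ping checks follow just below) comes down to the sketch here; every namespace, interface and address name is taken verbatim from the logged commands, and the second initiator/target pair (nvmf_init_if2 / nvmf_tgt_if2, 10.0.0.2 / 10.0.0.4) follows the same pattern:

    # Sketch of the nvmf_veth_init topology (names and addresses as shown in the trace; run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                               # bridge joins the *_br peer ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br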
00:17:00.909 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:00.909 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:00.909 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:00.909 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:00.909 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:00.909 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:00.910 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:00.910 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:00.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:00.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:17:00.910 00:17:00.910 --- 10.0.0.3 ping statistics --- 00:17:00.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.910 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:00.910 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:00.910 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:00.910 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:17:00.910 00:17:00.910 --- 10.0.0.4 ping statistics --- 00:17:00.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.910 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:00.910 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:01.169 00:17:01.169 --- 10.0.0.1 ping statistics --- 00:17:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.169 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:01.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:01.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:17:01.169 00:17:01.169 --- 10.0.0.2 ping statistics --- 00:17:01.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.169 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78074 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78074 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78074 ']' 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
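The nvmfappstart step that begins at the end of the block above amounts to launching nvmf_tgt inside the freshly built namespace with the nvme_auth log component enabled and then waiting for its RPC socket at /var/tmp/spdk.sock. A minimal stand-alone reproduction under the paths shown in the trace might look like the sketch below; the rpc.py/rpc_get_methods polling loop is an assumption standing in for the harness's waitforlisten helper, not something shown in the log:

    # Launch the SPDK NVMe-oF target in the test namespace (command line as traced:
    # shared-memory id 0, tracepoint group mask 0xFFFF, nvme_auth debug log component).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # Assumed waitforlisten equivalent: poll the default RPC socket until the app answers,
    # bailing out if the target process dies first.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done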
00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:01.169 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=885823e1f187d11cf52b726845f97f0e 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.e55 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 885823e1f187d11cf52b726845f97f0e 0 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 885823e1f187d11cf52b726845f97f0e 0 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=885823e1f187d11cf52b726845f97f0e 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:01.429 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:01.743 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.e55 00:17:01.743 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.e55 00:17:01.743 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.e55 00:17:01.743 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:01.743 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:01.743 10:06:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4543e1d032a56182d5fa7c9b292b0736ad16eff7c39bd8c6384cfea1b73a692a 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.MG3 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4543e1d032a56182d5fa7c9b292b0736ad16eff7c39bd8c6384cfea1b73a692a 3 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4543e1d032a56182d5fa7c9b292b0736ad16eff7c39bd8c6384cfea1b73a692a 3 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4543e1d032a56182d5fa7c9b292b0736ad16eff7c39bd8c6384cfea1b73a692a 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.MG3 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.MG3 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MG3 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d6ee1d3fc64da722431066e1cc735a552243b4780f6e3fe6 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ivu 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d6ee1d3fc64da722431066e1cc735a552243b4780f6e3fe6 0 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d6ee1d3fc64da722431066e1cc735a552243b4780f6e3fe6 0 
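Keys 0 through 4 (and their ckey counterparts) generated above and below all come out of the same gen_dhchap_key recipe: read len/2 random bytes as a hex string with xxd, wrap that string in a DH-HMAC-CHAP secret, and store it in a mode-0600 temp file whose path lands in keys[]/ckeys[]. The body of the embedded `python -` step is not visible in the trace, so the encoding in the sketch below (base64 of the secret bytes plus a little-endian CRC-32, between a "DHHC-1:<digest>:" prefix and a trailing ":") is an assumption based on the standard DHHC-1 secret representation rather than a copy of the helper:

    # Rough stand-alone equivalent of gen_dhchap_key <digest-name> <len> as traced above.
    # Digest ids follow the map in the log: null=0, sha256=1, sha384=2, sha512=3.
    digest=0
    len=48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # e.g. d6ee1d3f... in the trace
    file=$(mktemp -t spdk.key-null.XXX)
    # Assumed encoding: base64(secret bytes || little-endian CRC-32 of the secret).
    secret=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4,"little")).decode()))' "$key" "$digest")
    printf '%s\n' "$secret" > "$file"
    chmod 0600 "$file"
    echo "$file"                                      # this path is what keyring_file_add_key consumes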
00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d6ee1d3fc64da722431066e1cc735a552243b4780f6e3fe6 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ivu 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ivu 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ivu 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=319e84a78b90d9444ad766725b0eebe35d0c02bfaa0e7d39 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jIg 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 319e84a78b90d9444ad766725b0eebe35d0c02bfaa0e7d39 2 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 319e84a78b90d9444ad766725b0eebe35d0c02bfaa0e7d39 2 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=319e84a78b90d9444ad766725b0eebe35d0c02bfaa0e7d39 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jIg 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jIg 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jIg 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.744 10:06:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ec45a89a18536ded66912a2e97e0d748 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NNz 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ec45a89a18536ded66912a2e97e0d748 1 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ec45a89a18536ded66912a2e97e0d748 1 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ec45a89a18536ded66912a2e97e0d748 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NNz 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NNz 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.NNz 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:01.744 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=064705885e1202cdc473bc42906de3c0 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Svk 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 064705885e1202cdc473bc42906de3c0 1 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 064705885e1202cdc473bc42906de3c0 1 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=064705885e1202cdc473bc42906de3c0 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Svk 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Svk 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Svk 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9c72c69d161e77987474003da15d3ef0dca211ca766d2b45 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.748 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9c72c69d161e77987474003da15d3ef0dca211ca766d2b45 2 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9c72c69d161e77987474003da15d3ef0dca211ca766d2b45 2 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9c72c69d161e77987474003da15d3ef0dca211ca766d2b45 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:02.018 10:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.748 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.748 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.748 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:02.018 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:02.019 10:06:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6635168033e634e59addbdcc72b79064 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.mDJ 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6635168033e634e59addbdcc72b79064 0 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6635168033e634e59addbdcc72b79064 0 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6635168033e634e59addbdcc72b79064 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.mDJ 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.mDJ 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.mDJ 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bdfe4705c0c8964dfa140c8ff8e6f9f9b20f941abe386b672cc10103c800cd9e 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.psG 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bdfe4705c0c8964dfa140c8ff8e6f9f9b20f941abe386b672cc10103c800cd9e 3 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bdfe4705c0c8964dfa140c8ff8e6f9f9b20f941abe386b672cc10103c800cd9e 3 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bdfe4705c0c8964dfa140c8ff8e6f9f9b20f941abe386b672cc10103c800cd9e 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.psG 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.psG 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.psG 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78074 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78074 ']' 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:02.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:02.019 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.e55 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MG3 ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MG3 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ivu 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jIg ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.jIg 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.NNz 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Svk ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Svk 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.748 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.mDJ ]] 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.mDJ 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.586 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.psG 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.587 10:06:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:02.587 10:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:02.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.846 Waiting for block devices as requested 00:17:02.846 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:03.104 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:03.672 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:03.673 No valid GPT data, bailing 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:03.673 No valid GPT data, bailing 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:03.673 No valid GPT data, bailing 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:03.673 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:03.933 No valid GPT data, bailing 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid=89901d6b-8f02-4106-8c0e-f8e118ca6735 -a 10.0.0.1 -t tcp -s 4420 00:17:03.933 00:17:03.933 Discovery Log Number of Records 2, Generation counter 2 00:17:03.933 =====Discovery Log Entry 0====== 00:17:03.933 trtype: tcp 00:17:03.933 adrfam: ipv4 00:17:03.933 subtype: current discovery subsystem 00:17:03.933 treq: not specified, sq flow control disable supported 00:17:03.933 portid: 1 00:17:03.933 trsvcid: 4420 00:17:03.933 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:03.933 traddr: 10.0.0.1 00:17:03.933 eflags: none 00:17:03.933 sectype: none 00:17:03.933 =====Discovery Log Entry 1====== 00:17:03.933 trtype: tcp 00:17:03.933 adrfam: ipv4 00:17:03.933 subtype: nvme subsystem 00:17:03.933 treq: not specified, sq flow control disable supported 00:17:03.933 portid: 1 00:17:03.933 trsvcid: 4420 00:17:03.933 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:03.933 traddr: 10.0.0.1 00:17:03.933 eflags: none 00:17:03.933 sectype: none 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.933 10:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.933 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.192 nvme0n1 00:17:04.192 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.192 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.192 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.192 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.192 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.192 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.192 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.193 nvme0n1 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.193 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.453 
10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.453 10:06:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.453 nvme0n1 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:04.453 10:06:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.453 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.712 nvme0n1 00:17:04.712 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.712 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.712 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.712 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.712 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.712 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.713 10:06:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.713 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.972 nvme0n1 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.972 
10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.972 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.973 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.973 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.973 10:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
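The trace above walks keyid 0 through 4 over sha256/ffdhe2048, and each pass follows the same pattern: allow the host NQN on the kernel nvmet subsystem and load the DHCHAP secret into its configfs entry, configure the SPDK initiator with matching digest and dhgroup options, attach the controller with the host key (and, where present, the controller key), confirm it shows up in bdev_nvme_get_controllers, and detach before the next combination. A minimal standalone sketch of one such iteration follows, in shell, assuming the SPDK rpc.py client at scripts/rpc.py (the log's rpc_cmd helper wraps it); the NQNs, address and RPC flags mirror the trace, key1/ckey1 stand for keyring entries registered earlier in the run, and the per-host configfs attribute writes performed by nvmet_auth_set_key are only summarized in a comment because the xtrace does not show their target files.

  #!/usr/bin/env bash
  # One iteration of the DHCHAP auth loop, condensed from host/auth.sh (sketch).
  rpc=scripts/rpc.py                      # assumed location of the SPDK RPC client
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0

  # Target side: register the host and allow it on the subsystem (host/auth.sh@36-38).
  mkdir /sys/kernel/config/nvmet/hosts/$hostnqn
  ln -s /sys/kernel/config/nvmet/hosts/$hostnqn \
        /sys/kernel/config/nvmet/subsystems/$subnqn/allowed_hosts/$hostnqn
  # nvmet_auth_set_key then echoes 'hmac(sha256)', the dhgroup name and the
  # DHHC-1 secret into that host entry; the attribute file names are not visible
  # in the xtrace and are omitted here.

  # Initiator side: mirror connect_authenticate() from host/auth.sh.
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
       -a 10.0.0.1 -s 4420 -q $hostnqn -n $subnqn \
       --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the authenticated controller exists, then tear it down for the next combination.
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $rpc bdev_nvme_detach_controller nvme0

The remainder of the trace below repeats this sequence for the other dhgroups (ffdhe3072, ffdhe4096, and so on) and key ids.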
00:17:04.973 nvme0n1 00:17:04.973 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.973 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.973 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.973 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.973 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.973 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.231 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.231 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.232 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:05.491 10:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.491 nvme0n1 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.491 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.751 10:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.751 10:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.751 nvme0n1 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:05.751 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.752 10:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.011 nvme0n1 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.011 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.012 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.271 nvme0n1 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.271 nvme0n1 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.271 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.530 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:06.531 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.531 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:06.531 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:06.531 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.531 10:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.099 10:06:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.099 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.359 nvme0n1 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.359 10:06:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.359 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.619 nvme0n1 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.619 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.620 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.880 nvme0n1 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.880 10:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.139 nvme0n1 00:17:08.139 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.139 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.139 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.139 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.139 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.139 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.140 10:06:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.140 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.399 nvme0n1 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.399 10:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.304 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.305 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.305 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.305 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.305 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.305 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.564 nvme0n1 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.564 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.133 nvme0n1 00:17:11.133 10:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.133 10:06:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.133 10:06:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.133 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.399 nvme0n1 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.399 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.399 
10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.982 nvme0n1 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:11.982 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:11.983 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.983 10:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.241 nvme0n1 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.241 10:06:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:12.241 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.242 10:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.177 nvme0n1 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.177 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.744 nvme0n1 00:17:13.744 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.744 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.744 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.744 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.744 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.744 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.744 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.745 
10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.745 10:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.312 nvme0n1 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.312 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.570 10:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.136 nvme0n1 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.136 10:06:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.136 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.137 10:06:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.137 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.716 nvme0n1 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.716 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.984 nvme0n1 00:17:15.984 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.984 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.984 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.984 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.984 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.984 10:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:15.984 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:15.985 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.985 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.985 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.985 nvme0n1 00:17:15.985 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:16.244 
10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 nvme0n1 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.244 
10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.244 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:16.245 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.245 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.245 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.245 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.504 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.505 nvme0n1 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.505 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 nvme0n1 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 nvme0n1 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.765 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.024 
10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.024 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.024 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.024 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.025 10:06:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.025 10:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 nvme0n1 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:17.025 10:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.025 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.285 nvme0n1 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.285 10:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.285 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.547 nvme0n1 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.547 
10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:17.547 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.548 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
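[Editor's note] The xtrace output above is one pass of auth.sh's nested loops: for each DH group (ffdhe3072, then ffdhe4096, ffdhe6144, ...) and each key id 0-4, nvmet_auth_set_key pushes the DHHC-1 secret, 'hmac(sha384)' digest, and DH group to the kernel nvmet target (the echo lines at host/auth.sh@48-51), and connect_authenticate then attaches, verifies, and detaches a controller from the SPDK host side. The following is a minimal sketch of one connect_authenticate iteration, reconstructed only from the commands visible in this log; rpc_cmd is assumed to wrap scripts/rpc.py, and key0/ckey0 are keyring names registered earlier in the test (not shown in this excerpt).

  #!/usr/bin/env bash
  # Assumption: SPDK_DIR points at an SPDK checkout with a running target at 10.0.0.1:4420.
  rpc_cmd() { "${SPDK_DIR:?}/scripts/rpc.py" "$@"; }

  digest=sha384 dhgroup=ffdhe3072 keyid=0

  # Restrict the host to the digest/DH-group pair under test (host/auth.sh@60).
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Connect with DH-HMAC-CHAP (host/auth.sh@61). In auth.sh the controller key is
  # optional and added via ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}, which
  # is why keyid=4 above is attached with --dhchap-key only.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # A successful authentication exposes the namespace (nvme0n1 in this log); confirm
  # the controller name and tear it down before the next key/DH-group combination
  # (host/auth.sh@64-65).
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The repeated get_main_ns_ip blocks in the log simply resolve which address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which here evaluates to the 10.0.0.1 echoed before each attach.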
00:17:17.811 nvme0n1 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:17.811 10:06:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.811 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.070 nvme0n1 00:17:18.070 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.070 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.070 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.070 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.070 10:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.070 10:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:18.070 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.071 10:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.071 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.329 nvme0n1 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.329 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.588 nvme0n1 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.588 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.589 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.848 nvme0n1 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.848 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:18.849 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.849 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:18.849 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:18.849 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:18.849 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:18.849 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.849 10:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.108 nvme0n1 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.108 10:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.108 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.366 nvme0n1 00:17:19.366 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.366 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.366 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.366 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.366 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.625 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.626 10:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.626 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.894 nvme0n1 00:17:19.894 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.894 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.894 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.894 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.894 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.894 10:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.894 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.462 nvme0n1 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.462 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.720 nvme0n1 00:17:20.720 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.720 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.720 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.720 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.720 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.720 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.978 10:06:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.978 10:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.237 nvme0n1 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.237 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.172 nvme0n1 00:17:22.172 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.172 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.172 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.172 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.172 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.172 10:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.172 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.172 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.172 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.172 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.173 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.740 nvme0n1 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.740 10:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.740 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.741 10:06:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.741 10:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.308 nvme0n1 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:23.308 10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.308 
10:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.937 nvme0n1 00:17:23.937 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.937 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.937 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.937 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.937 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.937 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.196 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.197 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.765 nvme0n1 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:24.765 10:06:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:24.765 10:06:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.765 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.025 nvme0n1 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.025 10:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:25.025 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:25.026 10:06:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.026 nvme0n1 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.026 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.286 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.287 nvme0n1 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.287 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.547 nvme0n1 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.547 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.548 nvme0n1 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.548 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.807 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.807 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.807 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.807 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:25.807 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.807 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.807 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.807 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.808 nvme0n1 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.808 10:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.068 nvme0n1 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:26.068 
10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.068 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.327 nvme0n1 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:26.327 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.328 
10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.328 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.587 nvme0n1 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.587 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.588 nvme0n1 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.588 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:26.847 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.848 nvme0n1 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.848 10:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.848 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.107 
10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.107 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.108 10:06:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.108 nvme0n1 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.108 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:27.368 10:06:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.368 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.637 nvme0n1 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.637 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.638 10:06:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.638 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.639 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.639 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.639 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.639 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.639 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.900 nvme0n1 00:17:27.900 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.900 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.900 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.900 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.901 
10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.901 10:06:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
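At this point the log has completed the hmac(sha512)/ffdhe4096 pass: each key index 0 through 4 was installed on the target with nvmet_auth_set_key, the host was restricted to the matching digest and DH group, the controller was attached with the corresponding --dhchap-key/--dhchap-ctrlr-key pair, its presence was confirmed via bdev_nvme_get_controllers, and it was detached again. As a reading aid, the sketch below condenses the loop these xtrace lines correspond to; it is reconstructed from the commands visible in the log rather than quoted from host/auth.sh, and rpc_cmd, nvmet_auth_set_key, and the keys/ckeys arrays are the autotest helpers and key tables already set up earlier in the run.

  # Condensed reconstruction of the positive-path loop seen in the xtrace above
  # (assumed shape, not the verbatim host/auth.sh).
  for dhgroup in "${dhgroups[@]}"; do        # e.g. ffdhe4096, ffdhe6144, ffdhe8192
      for keyid in "${!keys[@]}"; do         # key indexes 0..4
          # Target side: install the host key (and controller key, if defined) for this digest/group.
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"

          # Host side: only allow the digest/DH group under test ...
          rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

          # ... then authenticate with the matching key pair; the controller key is
          # omitted when ckeys[keyid] is empty (key index 4 in this log).
          ctrlr_key=()
          [[ -n "${ckeys[keyid]}" ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
              -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
              --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

          # The attach only counts if the controller really appeared; then clean up.
          [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
      done
  done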
00:17:28.160 nvme0n1 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:28.160 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.161 10:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.161 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 nvme0n1 00:17:28.420 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.420 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.420 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.420 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.420 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.420 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.679 10:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.679 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.680 10:07:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.680 10:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.939 nvme0n1 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.939 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 nvme0n1 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.508 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.767 nvme0n1 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.767 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.032 10:07:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.291 nvme0n1 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ODIzZTFmMTg3ZDExY2Y1MmI3MjY4NDVmOTdmMGViUbes: 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: ]] 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDU0M2UxZDAzMmE1NjE4MmQ1ZmE3YzliMjkyYjA3MzZhZDE2ZWZmN2MzOWJkOGM2Mzg0Y2ZlYTFiNzNhNjkyYT3Pnag=: 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.291 10:07:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.291 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.858 nvme0n1 00:17:30.858 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.858 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.858 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.858 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.858 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.858 10:07:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.117 10:07:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.117 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.684 nvme0n1 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.684 10:07:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.310 nvme0n1 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWM3MmM2OWQxNjFlNzc5ODc0NzQwMDNkYTE1ZDNlZjBkY2EyMTFjYTc2NmQyYjQ1pzBAgQ==: 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjYzNTE2ODAzM2U2MzRlNTlhZGRiZGNjNzJiNzkwNjRNTfg7: 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.310 10:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.878 nvme0n1 00:17:32.878 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.878 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.878 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.878 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.878 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.136 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.136 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.136 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.136 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.136 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.136 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.136 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmRmZTQ3MDVjMGM4OTY0ZGZhMTQwYzhmZjhlNmY5ZjliMjBmOTQxYWJlMzg2YjY3MmNjMTAxMDNjODAwY2Q5ZSaUnPg=: 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.137 10:07:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.137 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.704 nvme0n1 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.704 request: 00:17:33.704 { 00:17:33.704 "name": "nvme0", 00:17:33.704 "trtype": "tcp", 00:17:33.704 "traddr": "10.0.0.1", 00:17:33.704 "adrfam": "ipv4", 00:17:33.704 "trsvcid": "4420", 00:17:33.704 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:33.704 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:33.704 "prchk_reftag": false, 00:17:33.704 "prchk_guard": false, 00:17:33.704 "hdgst": false, 00:17:33.704 "ddgst": false, 00:17:33.704 "allow_unrecognized_csi": false, 00:17:33.704 "method": "bdev_nvme_attach_controller", 00:17:33.704 "req_id": 1 00:17:33.704 } 00:17:33.704 Got JSON-RPC error response 00:17:33.704 response: 00:17:33.704 { 00:17:33.704 "code": -5, 00:17:33.704 "message": "Input/output error" 00:17:33.704 } 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.704 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.963 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 request: 00:17:33.964 { 00:17:33.964 "name": "nvme0", 00:17:33.964 "trtype": "tcp", 00:17:33.964 "traddr": "10.0.0.1", 00:17:33.964 "adrfam": "ipv4", 00:17:33.964 "trsvcid": "4420", 00:17:33.964 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:33.964 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:33.964 "prchk_reftag": false, 00:17:33.964 "prchk_guard": false, 00:17:33.964 "hdgst": false, 00:17:33.964 "ddgst": false, 00:17:33.964 "dhchap_key": "key2", 00:17:33.964 "allow_unrecognized_csi": false, 00:17:33.964 "method": "bdev_nvme_attach_controller", 00:17:33.964 "req_id": 1 00:17:33.964 } 00:17:33.964 Got JSON-RPC error response 00:17:33.964 response: 00:17:33.964 { 00:17:33.964 "code": -5, 00:17:33.964 "message": "Input/output error" 00:17:33.964 } 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:33.964 10:07:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 10:07:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 request: 00:17:33.964 { 00:17:33.964 "name": "nvme0", 00:17:33.964 "trtype": "tcp", 00:17:33.964 "traddr": "10.0.0.1", 00:17:33.964 "adrfam": "ipv4", 00:17:33.964 "trsvcid": "4420", 
00:17:33.964 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:33.964 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:33.964 "prchk_reftag": false, 00:17:33.964 "prchk_guard": false, 00:17:33.964 "hdgst": false, 00:17:33.964 "ddgst": false, 00:17:33.964 "dhchap_key": "key1", 00:17:33.964 "dhchap_ctrlr_key": "ckey2", 00:17:33.964 "allow_unrecognized_csi": false, 00:17:33.964 "method": "bdev_nvme_attach_controller", 00:17:33.964 "req_id": 1 00:17:33.964 } 00:17:33.964 Got JSON-RPC error response 00:17:33.964 response: 00:17:33.964 { 00:17:33.964 "code": -5, 00:17:33.964 "message": "Input/output error" 00:17:33.964 } 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.964 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.224 nvme0n1 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.224 request: 00:17:34.224 { 00:17:34.224 "name": "nvme0", 00:17:34.224 "dhchap_key": "key1", 00:17:34.224 "dhchap_ctrlr_key": "ckey2", 00:17:34.224 "method": "bdev_nvme_set_keys", 00:17:34.224 "req_id": 1 00:17:34.224 } 00:17:34.224 Got JSON-RPC error response 00:17:34.224 response: 00:17:34.224 
{ 00:17:34.224 "code": -13, 00:17:34.224 "message": "Permission denied" 00:17:34.224 } 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.224 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.484 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:34.484 10:07:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDZlZTFkM2ZjNjRkYTcyMjQzMTA2NmUxY2M3MzVhNTUyMjQzYjQ3ODBmNmUzZmU2sQpseA==: 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: ]] 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzE5ZTg0YTc4YjkwZDk0NDRhZDc2NjcyNWIwZWViZTM1ZDBjMDJiZmFhMGU3ZDM58Toy7w==: 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.422 nvme0n1 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NWE4OWExODUzNmRlZDY2OTEyYTJlOTdlMGQ3NDg518Uz: 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: ]] 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY0NzA1ODg1ZTEyMDJjZGM0NzNiYzQyOTA2ZGUzYzAlO56K: 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.422 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.422 request: 00:17:35.422 { 00:17:35.422 "name": "nvme0", 00:17:35.422 "dhchap_key": "key2", 00:17:35.422 "dhchap_ctrlr_key": "ckey1", 00:17:35.680 "method": "bdev_nvme_set_keys", 00:17:35.680 "req_id": 1 00:17:35.680 } 00:17:35.680 Got JSON-RPC error response 00:17:35.680 response: 00:17:35.680 { 00:17:35.680 "code": -13, 00:17:35.680 "message": "Permission denied" 00:17:35.680 } 00:17:35.680 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:35.680 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:35.680 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:35.681 10:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:36.614 rmmod nvme_tcp 00:17:36.614 rmmod nvme_fabrics 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78074 ']' 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78074 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 78074 ']' 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 78074 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:17:36.614 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78074 00:17:36.873 killing process with pid 78074 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78074' 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 78074 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 78074 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:36.873 10:07:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:36.874 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:36.874 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:36.874 10:07:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:37.132 10:07:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:38.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.082 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
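The trace above tears down the kernel nvmet target that acted as the authentication peer: the allowed-host link and host entry are removed, the configfs port and subsystem are dismantled, and the nvmet modules are unloaded before setup.sh rebinds the NVMe devices. The outline below is a hand-run sketch of that same sequence, reconstructed from the clean_kernel_target commands visible in the log; the redirect target of the bare "echo 0" is not shown by xtrace, so the namespace-disable step is an assumption, and the NQNs/paths are simply the ones printed in this run.

# Illustrative teardown of the kernel nvmet target used by host/auth.sh
# (mirrors the configfs operations traced above; not the canonical script).
SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

# Unlink the allowed host and drop the host entry (host/auth.sh@25-26 in the trace).
rm    "$SUBSYS/allowed_hosts/nqn.2024-02.io.spdk:host0"
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Disable and remove the namespace, then the port link, port, and subsystem.
echo 0 > "$SUBSYS/namespaces/1/enable"   # assumption: target of the bare 'echo 0' in the trace
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir "$SUBSYS/namespaces/1"
rmdir /sys/kernel/config/nvmet/ports/1
rmdir "$SUBSYS"

# Unload the kernel target modules once no holders remain.
modprobe -r nvmet_tcp nvmet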
00:17:38.082 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:38.082 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.e55 /tmp/spdk.key-null.ivu /tmp/spdk.key-sha256.NNz /tmp/spdk.key-sha384.748 /tmp/spdk.key-sha512.psG /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:38.082 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:38.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.341 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:38.341 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:38.599 00:17:38.599 real 0m38.066s 00:17:38.599 user 0m34.534s 00:17:38.599 sys 0m3.880s 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.599 ************************************ 00:17:38.599 END TEST nvmf_auth_host 00:17:38.599 ************************************ 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.599 ************************************ 00:17:38.599 START TEST nvmf_digest 00:17:38.599 ************************************ 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:38.599 * Looking for test storage... 
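For reference, the nvmf_auth_host run that just finished exercised DH-HMAC-CHAP through three bdev_nvme RPCs visible in the trace: bdev_nvme_set_options to pin the digest/dhgroup, bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key for the authenticated connect, and bdev_nvme_set_keys to re-key a live controller (mismatched pairs fail with the -5 and -13 error responses shown above). The sketch below replays that sequence by hand; it assumes the standard scripts/rpc.py client at the repo path printed in the log and that key1/ckey1 etc. are the keyring names the test registered from its /tmp/spdk.key-* files, whereas the test itself goes through its rpc_cmd wrapper.

# Sketch only: hand-run equivalent of the RPC sequence seen in the trace.
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# 1. Restrict the initiator to one digest/dhgroup combination.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Authenticated connect to the kernel target (key names per this run).
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Re-key the live controller; a key the target does not expect is rejected
#    with -13 Permission denied, as in the bdev_nvme_set_keys responses above.
$RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Inspect and tear down the controller.
$RPC bdev_nvme_get_controllers
$RPC bdev_nvme_detach_controller nvme0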
00:17:38.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.599 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:38.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.859 --rc genhtml_branch_coverage=1 00:17:38.859 --rc genhtml_function_coverage=1 00:17:38.859 --rc genhtml_legend=1 00:17:38.859 --rc geninfo_all_blocks=1 00:17:38.859 --rc geninfo_unexecuted_blocks=1 00:17:38.859 00:17:38.859 ' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:38.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.859 --rc genhtml_branch_coverage=1 00:17:38.859 --rc genhtml_function_coverage=1 00:17:38.859 --rc genhtml_legend=1 00:17:38.859 --rc geninfo_all_blocks=1 00:17:38.859 --rc geninfo_unexecuted_blocks=1 00:17:38.859 00:17:38.859 ' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:38.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.859 --rc genhtml_branch_coverage=1 00:17:38.859 --rc genhtml_function_coverage=1 00:17:38.859 --rc genhtml_legend=1 00:17:38.859 --rc geninfo_all_blocks=1 00:17:38.859 --rc geninfo_unexecuted_blocks=1 00:17:38.859 00:17:38.859 ' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:38.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.859 --rc genhtml_branch_coverage=1 00:17:38.859 --rc genhtml_function_coverage=1 00:17:38.859 --rc genhtml_legend=1 00:17:38.859 --rc geninfo_all_blocks=1 00:17:38.859 --rc geninfo_unexecuted_blocks=1 00:17:38.859 00:17:38.859 ' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.859 10:07:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.859 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:38.859 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:38.860 Cannot find device "nvmf_init_br" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:38.860 Cannot find device "nvmf_init_br2" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:38.860 Cannot find device "nvmf_tgt_br" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:38.860 Cannot find device "nvmf_tgt_br2" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:38.860 Cannot find device "nvmf_init_br" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:38.860 Cannot find device "nvmf_init_br2" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:38.860 Cannot find device "nvmf_tgt_br" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:38.860 Cannot find device "nvmf_tgt_br2" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:38.860 Cannot find device "nvmf_br" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:38.860 Cannot find device "nvmf_init_if" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:38.860 Cannot find device "nvmf_init_if2" 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:38.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:38.860 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:38.860 10:07:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:38.860 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:38.860 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:38.860 10:07:11 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:39.120 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:39.120 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:39.120 00:17:39.120 --- 10.0.0.3 ping statistics --- 00:17:39.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.120 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:39.120 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:39.120 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:39.120 00:17:39.120 --- 10.0.0.4 ping statistics --- 00:17:39.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.120 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:39.120 00:17:39.120 --- 10.0.0.1 ping statistics --- 00:17:39.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.120 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:39.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:17:39.120 00:17:39.120 --- 10.0.0.2 ping statistics --- 00:17:39.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.120 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:39.120 ************************************ 00:17:39.120 START TEST nvmf_digest_clean 00:17:39.120 ************************************ 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
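[Editor's note] The nvmf_veth_init sequence traced above builds a small two-namespace topology before any NVMe/TCP traffic flows: two initiator-side veth pairs stay in the root namespace, two target-side pairs are moved into nvmf_tgt_ns_spdk, everything is joined over the nvmf_br bridge, and TCP port 4420 is opened in iptables. A condensed sketch of the same commands follows, with device names and 10.0.0.x addresses taken directly from the trace; only the first init/tgt pair is shown, the *_if2 pair is set up identically.
  ip netns add nvmf_tgt_ns_spdk
  # initiator side stays in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  # target side is moved into the namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bring everything up and bridge the two *_br ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP (port 4420) in, plus bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity checks, as in the trace
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1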
00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79727 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79727 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79727 ']' 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:39.120 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:39.120 [2024-11-04 10:07:11.285508] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:17:39.120 [2024-11-04 10:07:11.285628] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.379 [2024-11-04 10:07:11.438019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.379 [2024-11-04 10:07:11.505281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.379 [2024-11-04 10:07:11.505361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.379 [2024-11-04 10:07:11.505375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.379 [2024-11-04 10:07:11.505386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.379 [2024-11-04 10:07:11.505394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
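[Editor's note] At this point the target has just been launched inside the namespace with ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc, and the harness waits for /var/tmp/spdk.sock. The notices that follow (uring socket override, null0 bdev, TCP transport init, listener on 10.0.0.3 port 4420) are produced by common_target_config driving that RPC socket; the trace shows only the results, so the sequence below is an inferred sketch, not a verbatim replay, and the null bdev size/block size are placeholders.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # select the uring socket implementation, then finish subsystem init
  $rpc sock_set_default_impl -i uring
  $rpc framework_start_init
  # TCP transport with the options recorded above in NVMF_TRANSPORT_OPTS ('-t tcp -o')
  $rpc nvmf_create_transport -t tcp -o
  # null backing bdev and the cnode1 subsystem with its 10.0.0.3:4420 listener
  $rpc bdev_null_create null0 100 4096          # size/block size illustrative only
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420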
00:17:39.379 [2024-11-04 10:07:11.505872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.379 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:39.379 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:39.379 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.379 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:39.379 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:39.638 [2024-11-04 10:07:11.642898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:39.638 null0 00:17:39.638 [2024-11-04 10:07:11.697030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.638 [2024-11-04 10:07:11.721155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79746 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79746 /var/tmp/bperf.sock 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79746 ']' 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:39.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:39.638 10:07:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:39.638 [2024-11-04 10:07:11.781012] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:17:39.638 [2024-11-04 10:07:11.781118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79746 ] 00:17:39.897 [2024-11-04 10:07:11.927549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.897 [2024-11-04 10:07:11.992031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.897 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:39.897 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:39.897 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:39.897 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:39.897 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:40.464 [2024-11-04 10:07:12.340473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:40.464 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.464 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.724 nvme0n1 00:17:40.724 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:40.724 10:07:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:40.982 Running I/O for 2 seconds... 
00:17:42.852 14097.00 IOPS, 55.07 MiB/s [2024-11-04T10:07:15.022Z] 14541.50 IOPS, 56.80 MiB/s 00:17:42.852 Latency(us) 00:17:42.852 [2024-11-04T10:07:15.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.852 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:42.852 nvme0n1 : 2.01 14554.45 56.85 0.00 0.00 8786.96 7506.85 24546.21 00:17:42.852 [2024-11-04T10:07:15.022Z] =================================================================================================================== 00:17:42.852 [2024-11-04T10:07:15.022Z] Total : 14554.45 56.85 0.00 0.00 8786.96 7506.85 24546.21 00:17:42.852 { 00:17:42.852 "results": [ 00:17:42.852 { 00:17:42.852 "job": "nvme0n1", 00:17:42.852 "core_mask": "0x2", 00:17:42.852 "workload": "randread", 00:17:42.852 "status": "finished", 00:17:42.852 "queue_depth": 128, 00:17:42.852 "io_size": 4096, 00:17:42.852 "runtime": 2.007015, 00:17:42.852 "iops": 14554.450265693082, 00:17:42.852 "mibps": 56.8533213503636, 00:17:42.852 "io_failed": 0, 00:17:42.852 "io_timeout": 0, 00:17:42.852 "avg_latency_us": 8786.958337612543, 00:17:42.852 "min_latency_us": 7506.850909090909, 00:17:42.852 "max_latency_us": 24546.21090909091 00:17:42.852 } 00:17:42.852 ], 00:17:42.852 "core_count": 1 00:17:42.852 } 00:17:42.852 10:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:42.852 10:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:42.852 10:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:42.852 | select(.opcode=="crc32c") 00:17:42.852 | "\(.module_name) \(.executed)"' 00:17:42.852 10:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:42.852 10:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79746 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79746 ']' 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79746 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79746 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:43.111 killing process with pid 79746 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79746' 00:17:43.111 Received shutdown signal, test time was about 2.000000 seconds 00:17:43.111 00:17:43.111 Latency(us) 00:17:43.111 [2024-11-04T10:07:15.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.111 [2024-11-04T10:07:15.281Z] =================================================================================================================== 00:17:43.111 [2024-11-04T10:07:15.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79746 00:17:43.111 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79746 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79799 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79799 /var/tmp/bperf.sock 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79799 ']' 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:43.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:43.370 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:43.370 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:43.370 Zero copy mechanism will not be used. 00:17:43.370 [2024-11-04 10:07:15.520930] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:17:43.370 [2024-11-04 10:07:15.521059] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79799 ] 00:17:43.629 [2024-11-04 10:07:15.669104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.629 [2024-11-04 10:07:15.729020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.629 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:43.629 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:43.629 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:43.629 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:43.629 10:07:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:44.196 [2024-11-04 10:07:16.138768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:44.196 10:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.196 10:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.456 nvme0n1 00:17:44.456 10:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:44.456 10:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:44.714 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:44.714 Zero copy mechanism will not be used. 00:17:44.714 Running I/O for 2 seconds... 
00:17:46.605 7680.00 IOPS, 960.00 MiB/s [2024-11-04T10:07:18.775Z] 7816.00 IOPS, 977.00 MiB/s 00:17:46.605 Latency(us) 00:17:46.605 [2024-11-04T10:07:18.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.605 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:46.605 nvme0n1 : 2.00 7811.79 976.47 0.00 0.00 2045.13 1772.45 9830.40 00:17:46.605 [2024-11-04T10:07:18.775Z] =================================================================================================================== 00:17:46.605 [2024-11-04T10:07:18.775Z] Total : 7811.79 976.47 0.00 0.00 2045.13 1772.45 9830.40 00:17:46.606 { 00:17:46.606 "results": [ 00:17:46.606 { 00:17:46.606 "job": "nvme0n1", 00:17:46.606 "core_mask": "0x2", 00:17:46.606 "workload": "randread", 00:17:46.606 "status": "finished", 00:17:46.606 "queue_depth": 16, 00:17:46.606 "io_size": 131072, 00:17:46.606 "runtime": 2.003125, 00:17:46.606 "iops": 7811.794071762871, 00:17:46.606 "mibps": 976.4742589703588, 00:17:46.606 "io_failed": 0, 00:17:46.606 "io_timeout": 0, 00:17:46.606 "avg_latency_us": 2045.1273173452312, 00:17:46.606 "min_latency_us": 1772.4509090909091, 00:17:46.606 "max_latency_us": 9830.4 00:17:46.606 } 00:17:46.606 ], 00:17:46.606 "core_count": 1 00:17:46.606 } 00:17:46.606 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:46.606 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:46.606 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:46.606 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:46.606 | select(.opcode=="crc32c") 00:17:46.606 | "\(.module_name) \(.executed)"' 00:17:46.606 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79799 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79799 ']' 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79799 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79799 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:46.865 
killing process with pid 79799 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79799' 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79799 00:17:46.865 Received shutdown signal, test time was about 2.000000 seconds 00:17:46.865 00:17:46.865 Latency(us) 00:17:46.865 [2024-11-04T10:07:19.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:46.865 [2024-11-04T10:07:19.035Z] =================================================================================================================== 00:17:46.865 [2024-11-04T10:07:19.035Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:46.865 10:07:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79799 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79852 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79852 /var/tmp/bperf.sock 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79852 ']' 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:47.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:47.124 10:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.124 [2024-11-04 10:07:19.260723] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:17:47.124 [2024-11-04 10:07:19.260831] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79852 ] 00:17:47.383 [2024-11-04 10:07:19.409042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.383 [2024-11-04 10:07:19.475929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.321 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:48.321 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:48.321 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:48.321 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:48.321 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:48.580 [2024-11-04 10:07:20.580065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.580 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.580 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.839 nvme0n1 00:17:48.839 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:48.839 10:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:49.097 Running I/O for 2 seconds... 
00:17:50.990 16130.00 IOPS, 63.01 MiB/s [2024-11-04T10:07:23.160Z] 16193.00 IOPS, 63.25 MiB/s 00:17:50.990 Latency(us) 00:17:50.990 [2024-11-04T10:07:23.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.990 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.990 nvme0n1 : 2.01 16181.05 63.21 0.00 0.00 7903.47 4676.89 16086.11 00:17:50.990 [2024-11-04T10:07:23.160Z] =================================================================================================================== 00:17:50.990 [2024-11-04T10:07:23.160Z] Total : 16181.05 63.21 0.00 0.00 7903.47 4676.89 16086.11 00:17:50.990 { 00:17:50.990 "results": [ 00:17:50.990 { 00:17:50.990 "job": "nvme0n1", 00:17:50.990 "core_mask": "0x2", 00:17:50.990 "workload": "randwrite", 00:17:50.990 "status": "finished", 00:17:50.990 "queue_depth": 128, 00:17:50.990 "io_size": 4096, 00:17:50.990 "runtime": 2.009388, 00:17:50.990 "iops": 16181.046169281393, 00:17:50.990 "mibps": 63.20721159875544, 00:17:50.990 "io_failed": 0, 00:17:50.990 "io_timeout": 0, 00:17:50.990 "avg_latency_us": 7903.474871244274, 00:17:50.990 "min_latency_us": 4676.887272727273, 00:17:50.990 "max_latency_us": 16086.10909090909 00:17:50.990 } 00:17:50.990 ], 00:17:50.990 "core_count": 1 00:17:50.990 } 00:17:50.990 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:50.990 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:50.990 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:50.990 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:50.990 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:50.990 | select(.opcode=="crc32c") 00:17:50.990 | "\(.module_name) \(.executed)"' 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79852 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79852 ']' 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79852 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79852 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:17:51.249 killing process with pid 79852 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79852' 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79852 00:17:51.249 Received shutdown signal, test time was about 2.000000 seconds 00:17:51.249 00:17:51.249 Latency(us) 00:17:51.249 [2024-11-04T10:07:23.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.249 [2024-11-04T10:07:23.419Z] =================================================================================================================== 00:17:51.249 [2024-11-04T10:07:23.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.249 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79852 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79913 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79913 /var/tmp/bperf.sock 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79913 ']' 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:51.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:51.508 10:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:51.508 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:51.508 Zero copy mechanism will not be used. 00:17:51.508 [2024-11-04 10:07:23.661971] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:17:51.508 [2024-11-04 10:07:23.662055] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79913 ] 00:17:51.767 [2024-11-04 10:07:23.806875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.767 [2024-11-04 10:07:23.864262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.709 10:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:52.709 10:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:52.709 10:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:52.709 10:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:52.709 10:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:52.986 [2024-11-04 10:07:24.958142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:52.986 10:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.986 10:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:53.245 nvme0n1 00:17:53.245 10:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:53.245 10:07:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:53.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:53.503 Zero copy mechanism will not be used. 00:17:53.503 Running I/O for 2 seconds... 
00:17:55.376 6337.00 IOPS, 792.12 MiB/s [2024-11-04T10:07:27.546Z] 6368.50 IOPS, 796.06 MiB/s 00:17:55.376 Latency(us) 00:17:55.376 [2024-11-04T10:07:27.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.376 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:55.376 nvme0n1 : 2.00 6366.66 795.83 0.00 0.00 2507.59 1899.05 9055.88 00:17:55.376 [2024-11-04T10:07:27.546Z] =================================================================================================================== 00:17:55.376 [2024-11-04T10:07:27.546Z] Total : 6366.66 795.83 0.00 0.00 2507.59 1899.05 9055.88 00:17:55.376 { 00:17:55.376 "results": [ 00:17:55.376 { 00:17:55.376 "job": "nvme0n1", 00:17:55.376 "core_mask": "0x2", 00:17:55.376 "workload": "randwrite", 00:17:55.376 "status": "finished", 00:17:55.376 "queue_depth": 16, 00:17:55.376 "io_size": 131072, 00:17:55.376 "runtime": 2.003091, 00:17:55.376 "iops": 6366.660326465448, 00:17:55.376 "mibps": 795.832540808181, 00:17:55.376 "io_failed": 0, 00:17:55.376 "io_timeout": 0, 00:17:55.376 "avg_latency_us": 2507.59389861922, 00:17:55.376 "min_latency_us": 1899.0545454545454, 00:17:55.376 "max_latency_us": 9055.883636363636 00:17:55.376 } 00:17:55.376 ], 00:17:55.376 "core_count": 1 00:17:55.376 } 00:17:55.376 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:55.376 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:55.376 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:55.376 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:55.376 | select(.opcode=="crc32c") 00:17:55.376 | "\(.module_name) \(.executed)"' 00:17:55.376 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79913 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79913 ']' 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79913 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79913 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:17:55.636 killing process with pid 79913 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79913' 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79913 00:17:55.636 Received shutdown signal, test time was about 2.000000 seconds 00:17:55.636 00:17:55.636 Latency(us) 00:17:55.636 [2024-11-04T10:07:27.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.636 [2024-11-04T10:07:27.806Z] =================================================================================================================== 00:17:55.636 [2024-11-04T10:07:27.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.636 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79913 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79727 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79727 ']' 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79727 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79727 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:55.932 killing process with pid 79727 00:17:55.932 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79727' 00:17:55.933 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79727 00:17:55.933 10:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79727 00:17:56.193 00:17:56.193 real 0m16.973s 00:17:56.193 user 0m33.567s 00:17:56.193 sys 0m4.555s 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:56.193 ************************************ 00:17:56.193 END TEST nvmf_digest_clean 00:17:56.193 ************************************ 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:56.193 ************************************ 00:17:56.193 START TEST nvmf_digest_error 00:17:56.193 ************************************ 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:17:56.193 10:07:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79996 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79996 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79996 ']' 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:56.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:56.193 10:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:56.193 [2024-11-04 10:07:28.306117] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:17:56.193 [2024-11-04 10:07:28.306234] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.451 [2024-11-04 10:07:28.455585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.451 [2024-11-04 10:07:28.523123] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.451 [2024-11-04 10:07:28.523195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.451 [2024-11-04 10:07:28.523209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.451 [2024-11-04 10:07:28.523220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.451 [2024-11-04 10:07:28.523229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
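(Annotation, not part of the captured console output: the digest-error test starting here drives everything through two RPC sockets, /var/tmp/spdk.sock for the nvmf target launched with --wait-for-rpc and /var/tmp/bperf.sock for the bdevperf initiator. The error-injection steps traced in the next entries amount to, roughly:

  # on the target socket (via rpc_cmd): route crc32c to the error accel module, then corrupt 256 operations
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256

  # on the bperf socket: attach the controller with data digest enabled and run the workload
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the digests corrupted, each affected READ completes with COMMAND TRANSIENT TRANSPORT ERROR status and the "data digest error" messages that make up the bulk of the output below. The explicit -s /var/tmp/spdk.sock form for the first two calls is an assumption based on the rpc_addr shown in this log; the script itself issues them through the rpc_cmd wrapper.)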
00:17:56.451 [2024-11-04 10:07:28.523703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.387 [2024-11-04 10:07:29.328327] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.387 [2024-11-04 10:07:29.393103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:57.387 null0 00:17:57.387 [2024-11-04 10:07:29.446157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.387 [2024-11-04 10:07:29.470453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80034 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80034 /var/tmp/bperf.sock 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:57.387 10:07:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80034 ']' 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:57.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:57.387 10:07:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.387 [2024-11-04 10:07:29.528837] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:17:57.387 [2024-11-04 10:07:29.528922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80034 ] 00:17:57.656 [2024-11-04 10:07:29.675287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.656 [2024-11-04 10:07:29.735975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.656 [2024-11-04 10:07:29.790305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:58.591 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:58.591 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:58.591 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:58.591 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:58.865 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:58.865 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.865 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:58.865 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.865 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.865 10:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.123 nvme0n1 00:17:59.123 10:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:59.123 10:07:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.123 10:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.123 10:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.123 10:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:59.123 10:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:59.382 Running I/O for 2 seconds... 00:17:59.382 [2024-11-04 10:07:31.372057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.372110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.372124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.389528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.389583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.389596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.407284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.407320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.407333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.424738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.424776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.424790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.442239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.442290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.442302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.459685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.459722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19539 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.459734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.477122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.477172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.477184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.494423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.494474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.494486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.512018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.512056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.512068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.529993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.530046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.530058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.382 [2024-11-04 10:07:31.547561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.382 [2024-11-04 10:07:31.547620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.382 [2024-11-04 10:07:31.547632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.641 [2024-11-04 10:07:31.565281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.641 [2024-11-04 10:07:31.565330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.641 [2024-11-04 10:07:31.565341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.641 [2024-11-04 10:07:31.582674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.641 [2024-11-04 10:07:31.582724] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.641 [2024-11-04 10:07:31.582737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.641 [2024-11-04 10:07:31.599935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.641 [2024-11-04 10:07:31.599971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.641 [2024-11-04 10:07:31.599984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.641 [2024-11-04 10:07:31.617291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.641 [2024-11-04 10:07:31.617340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.641 [2024-11-04 10:07:31.617352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.641 [2024-11-04 10:07:31.634980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.641 [2024-11-04 10:07:31.635017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.641 [2024-11-04 10:07:31.635030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.641 [2024-11-04 10:07:31.652570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.641 [2024-11-04 10:07:31.652643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.641 [2024-11-04 10:07:31.652655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.641 [2024-11-04 10:07:31.669895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.641 [2024-11-04 10:07:31.669934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.641 [2024-11-04 10:07:31.669946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.641 [2024-11-04 10:07:31.687741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.641 [2024-11-04 10:07:31.687778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.642 [2024-11-04 10:07:31.687790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.642 [2024-11-04 10:07:31.705436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.642 [2024-11-04 10:07:31.705486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.642 [2024-11-04 10:07:31.705498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.642 [2024-11-04 10:07:31.723154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.642 [2024-11-04 10:07:31.723204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.642 [2024-11-04 10:07:31.723217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.642 [2024-11-04 10:07:31.741094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.642 [2024-11-04 10:07:31.741132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.642 [2024-11-04 10:07:31.741144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.642 [2024-11-04 10:07:31.758513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.642 [2024-11-04 10:07:31.758563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.642 [2024-11-04 10:07:31.758575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.642 [2024-11-04 10:07:31.776102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.642 [2024-11-04 10:07:31.776138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.642 [2024-11-04 10:07:31.776150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.642 [2024-11-04 10:07:31.793541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.642 [2024-11-04 10:07:31.793590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.642 [2024-11-04 10:07:31.793627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.642 [2024-11-04 10:07:31.811093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.642 [2024-11-04 10:07:31.811140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.642 [2024-11-04 10:07:31.811151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.828593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.828649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.828678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.846281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.846330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.846342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.863613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.863689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.863702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.880907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.880943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.880955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.898250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.898297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.898308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.915537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.915586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.915598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.933207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.933256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.933268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.950767] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.950801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.950813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.968162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.968225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.968237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:31.985632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:31.985693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.901 [2024-11-04 10:07:31.985705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.901 [2024-11-04 10:07:32.003292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.901 [2024-11-04 10:07:32.003340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.902 [2024-11-04 10:07:32.003353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.902 [2024-11-04 10:07:32.020976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.902 [2024-11-04 10:07:32.021013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.902 [2024-11-04 10:07:32.021026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.902 [2024-11-04 10:07:32.039196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.902 [2024-11-04 10:07:32.039245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.902 [2024-11-04 10:07:32.039257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.902 [2024-11-04 10:07:32.056579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:17:59.902 [2024-11-04 10:07:32.056635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.902 [2024-11-04 10:07:32.056663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:00.160 [2024-11-04 10:07:32.074234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.160 [2024-11-04 10:07:32.074281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.160 [2024-11-04 10:07:32.074293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.160 [2024-11-04 10:07:32.091705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.160 [2024-11-04 10:07:32.091740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.160 [2024-11-04 10:07:32.091753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.160 [2024-11-04 10:07:32.109146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.160 [2024-11-04 10:07:32.109194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.160 [2024-11-04 10:07:32.109205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.160 [2024-11-04 10:07:32.126562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.160 [2024-11-04 10:07:32.126636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.160 [2024-11-04 10:07:32.126665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.160 [2024-11-04 10:07:32.144004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.160 [2024-11-04 10:07:32.144039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.160 [2024-11-04 10:07:32.144052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.160 [2024-11-04 10:07:32.161211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.160 [2024-11-04 10:07:32.161258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.160 [2024-11-04 10:07:32.161269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.160 [2024-11-04 10:07:32.178421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.160 [2024-11-04 10:07:32.178468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.178480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.161 [2024-11-04 10:07:32.196302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.161 [2024-11-04 10:07:32.196350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.196361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.161 [2024-11-04 10:07:32.213553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.161 [2024-11-04 10:07:32.213626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.213655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.161 [2024-11-04 10:07:32.230983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.161 [2024-11-04 10:07:32.231018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.231030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.161 [2024-11-04 10:07:32.248316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.161 [2024-11-04 10:07:32.248363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.248375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.161 [2024-11-04 10:07:32.265930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.161 [2024-11-04 10:07:32.265966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.265978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.161 [2024-11-04 10:07:32.283378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.161 [2024-11-04 10:07:32.283415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.283427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.161 [2024-11-04 10:07:32.301036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.161 [2024-11-04 10:07:32.301071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.301083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.161 [2024-11-04 10:07:32.318565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.161 [2024-11-04 10:07:32.318641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.161 [2024-11-04 10:07:32.318655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.336289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.336338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.336350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 14422.00 IOPS, 56.34 MiB/s [2024-11-04T10:07:32.590Z] [2024-11-04 10:07:32.353786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.353822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.353835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.371201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.371252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.371264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.388911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.388947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.388960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.406549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.406598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.406619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.424109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.424145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:14714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.424162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.441659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.441713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.441726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.458995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.459060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.459087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.484153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.484233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.484245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.501880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.501916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.501928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.519601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.519681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.519694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.538193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.538258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.538270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.556076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.556112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.556125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.420 [2024-11-04 10:07:32.573537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.420 [2024-11-04 10:07:32.573587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.420 [2024-11-04 10:07:32.573599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.679 [2024-11-04 10:07:32.591139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.679 [2024-11-04 10:07:32.591195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.679 [2024-11-04 10:07:32.591208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.679 [2024-11-04 10:07:32.608534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.679 [2024-11-04 10:07:32.608583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.679 [2024-11-04 10:07:32.608594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.679 [2024-11-04 10:07:32.625836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.679 [2024-11-04 10:07:32.625887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.679 [2024-11-04 10:07:32.625899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.679 [2024-11-04 10:07:32.643238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.679 [2024-11-04 10:07:32.643288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.679 [2024-11-04 10:07:32.643299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.679 [2024-11-04 10:07:32.660742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.679 [2024-11-04 10:07:32.660779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.679 [2024-11-04 10:07:32.660792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.679 [2024-11-04 10:07:32.678205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12ba410) 00:18:00.679 [2024-11-04 10:07:32.678254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.679 [2024-11-04 10:07:32.678266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.695434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.695483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.695494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.712610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.712686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.712699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.729904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.729955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.729967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.747129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.747178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.747189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.764319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.764367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.764378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.781881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.781916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.781929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.799402] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.799452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.799464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.816933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.816969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.816981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.680 [2024-11-04 10:07:32.834333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.680 [2024-11-04 10:07:32.834383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.680 [2024-11-04 10:07:32.834395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.938 [2024-11-04 10:07:32.851999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.938 [2024-11-04 10:07:32.852034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.938 [2024-11-04 10:07:32.852046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.938 [2024-11-04 10:07:32.869481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.938 [2024-11-04 10:07:32.869534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.938 [2024-11-04 10:07:32.869546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.938 [2024-11-04 10:07:32.886981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.938 [2024-11-04 10:07:32.887019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.938 [2024-11-04 10:07:32.887031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.938 [2024-11-04 10:07:32.904386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.938 [2024-11-04 10:07:32.904437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.938 [2024-11-04 10:07:32.904449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:00.939 [2024-11-04 10:07:32.922045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:32.922095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:32.922108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:32.939540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:32.939591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:32.939628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:32.957082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:32.957133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:32.957146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:32.974510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:32.974560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:32.974572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:32.992093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:32.992159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:32.992186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:33.009622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:33.009667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:33.009680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:33.027057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:33.027108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:33.027121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:33.044709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:33.044743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:33.044756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:33.062127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:33.062164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:33.062176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:33.079561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:33.079636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:33.079649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.939 [2024-11-04 10:07:33.097163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:00.939 [2024-11-04 10:07:33.097215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.939 [2024-11-04 10:07:33.097227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.114834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.114871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.114883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.132420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.132469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.132482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.149826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.149874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.149887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.167350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.167401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.167414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.184961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.184996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.185009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.202492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.202543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.202555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.219988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.220024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.220036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.237433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.237484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.237495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.254964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.254997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.255009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.272474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.272523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:01.198 [2024-11-04 10:07:33.272535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.289975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.290010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.290022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.307504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.307552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.307564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.325105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.325153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.325165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 [2024-11-04 10:07:33.342635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ba410) 00:18:01.198 [2024-11-04 10:07:33.342685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.198 [2024-11-04 10:07:33.342697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.198 14421.50 IOPS, 56.33 MiB/s 00:18:01.198 Latency(us) 00:18:01.198 [2024-11-04T10:07:33.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.198 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:01.198 nvme0n1 : 2.01 14448.10 56.44 0.00 0.00 8852.40 8102.63 34078.72 00:18:01.198 [2024-11-04T10:07:33.368Z] =================================================================================================================== 00:18:01.198 [2024-11-04T10:07:33.368Z] Total : 14448.10 56.44 0.00 0.00 8852.40 8102.63 34078.72 00:18:01.198 { 00:18:01.198 "results": [ 00:18:01.198 { 00:18:01.198 "job": "nvme0n1", 00:18:01.198 "core_mask": "0x2", 00:18:01.198 "workload": "randread", 00:18:01.198 "status": "finished", 00:18:01.198 "queue_depth": 128, 00:18:01.198 "io_size": 4096, 00:18:01.198 "runtime": 2.005177, 00:18:01.198 "iops": 14448.101090327687, 00:18:01.198 "mibps": 56.43789488409253, 00:18:01.198 "io_failed": 0, 00:18:01.198 "io_timeout": 0, 00:18:01.198 "avg_latency_us": 8852.402480474204, 00:18:01.198 "min_latency_us": 8102.632727272728, 00:18:01.198 "max_latency_us": 34078.72 00:18:01.198 } 00:18:01.198 ], 00:18:01.198 "core_count": 1 00:18:01.198 } 00:18:01.457 10:07:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:01.457 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:01.457 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:01.457 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:01.457 | .driver_specific 00:18:01.457 | .nvme_error 00:18:01.457 | .status_code 00:18:01.457 | .command_transient_transport_error' 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 )) 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80034 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80034 ']' 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80034 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80034 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:01.715 killing process with pid 80034 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80034' 00:18:01.715 Received shutdown signal, test time was about 2.000000 seconds 00:18:01.715 00:18:01.715 Latency(us) 00:18:01.715 [2024-11-04T10:07:33.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.715 [2024-11-04T10:07:33.885Z] =================================================================================================================== 00:18:01.715 [2024-11-04T10:07:33.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80034 00:18:01.715 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80034 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80094 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80094 /var/tmp/bperf.sock 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80094 ']' 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:02.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:02.005 10:07:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.005 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:02.005 Zero copy mechanism will not be used. 00:18:02.005 [2024-11-04 10:07:33.987934] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:18:02.005 [2024-11-04 10:07:33.988023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80094 ] 00:18:02.005 [2024-11-04 10:07:34.129541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.263 [2024-11-04 10:07:34.190038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.263 [2024-11-04 10:07:34.246169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.197 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.455 nvme0n1 00:18:03.714 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:03.714 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.714 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.714 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.714 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:03.714 10:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.714 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.714 Zero copy mechanism will not be used. 00:18:03.714 Running I/O for 2 seconds... 00:18:03.714 [2024-11-04 10:07:35.757093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.757147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.757178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.761536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.761575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.761619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.765850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.765889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.765918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.770038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.770092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.770120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.774211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.774250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
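The data digest errors logged through the rest of this run come from the error-injection pass set up in the trace above. Condensed into a minimal sketch, reusing only the rpc.py/bdevperf.py invocations, socket paths, and jq filter already visible in this log (the one un-socketed rpc.py call is an assumption: rpc_cmd is taken to reach the SPDK application's default RPC socket):

# enable per-controller NVMe error counters and unlimited bdev retries on the bdevperf side
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# attach the TCP controller with data digest enabled so corrupted CRC32C values surface as digest errors
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# inject crc32c corruption at the interval traced above (sent via rpc_cmd, assumed default RPC socket)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
# drive the configured randread workload for the 2-second window
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# read back the transient transport error count, as get_transient_errcount does above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test then asserts that this counter is greater than zero, mirroring the (( 113 > 0 )) check from the earlier pass.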
00:18:03.714 [2024-11-04 10:07:35.774278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.778295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.778334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.778362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.782427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.782465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.782494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.786874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.786930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.786944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.791396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.791437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.791466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.795990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.796272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.796290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.800911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.800955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.800985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.714 [2024-11-04 10:07:35.805316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.714 [2024-11-04 10:07:35.805354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.714 [2024-11-04 10:07:35.805384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.809960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.810002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.810016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.814356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.814395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.814424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.818781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.818820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.818834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.823260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.823299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.823328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.827698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.827738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.827751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.832054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.832094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.832107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.836488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.836529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.836543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.840940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.840981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.840995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.845393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.845432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.845460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.849737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.849776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.849805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.854258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.854299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.854328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.858725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.858764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.858778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.863084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.863122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.863151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.867532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 
00:18:03.715 [2024-11-04 10:07:35.867574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.867604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.871883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.871923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.871936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.876080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.876121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.876134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.715 [2024-11-04 10:07:35.880439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.715 [2024-11-04 10:07:35.880480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.715 [2024-11-04 10:07:35.880509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.884928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.884968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.884996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.889312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.889516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.889534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.893940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.893981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.893996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.898339] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.898378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.898407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.902610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.902666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.902681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.906844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.906883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.906912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.911179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.911218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.911248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.915588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.915639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.915654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.920143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.920198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.920227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.924523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.924562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.924591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.928883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.928922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.928951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.933374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.933414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.933443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.937785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.937840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.937853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.942085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.942124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.942153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.946353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.946391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.946419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.950633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.950671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.950686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.954978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.955061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.955090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.975 [2024-11-04 10:07:35.959423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.975 [2024-11-04 10:07:35.959459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.975 [2024-11-04 10:07:35.959487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.963864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.963903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.963916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.968244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.968283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.968312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.972679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.972717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.972747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.977076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.977114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.977143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.981411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.981447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.981475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.985735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.985773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.985803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.990105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.990143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.990171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.994824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.994862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.994890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:35.999398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:35.999436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:35.999466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.003664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.003730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.003759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.007808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.007872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.007886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.012081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.012152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.012165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.016231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.016267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.976 [2024-11-04 10:07:36.016295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.020455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.020491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.020520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.024742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.024780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.024809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.029163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.029214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.029243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.033728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.033765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.033794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.038116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.038153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.038181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.042443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.042481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.976 [2024-11-04 10:07:36.042510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.976 [2024-11-04 10:07:36.046746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:03.976 [2024-11-04 10:07:36.046784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:03.976 [2024-11-04 10:07:36.046814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:03.976 [2024-11-04 10:07:36.051182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0)
00:18:03.976 [2024-11-04 10:07:36.051220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:03.976 [2024-11-04 10:07:36.051249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:03.976 [2024-11-04 10:07:36.055767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0)
00:18:03.976 [2024-11-04 10:07:36.055805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:03.976 [2024-11-04 10:07:36.055827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line sequence (nvme_tcp.c:1365 data digest error on tqpair=(0x14815a0), nvme_qpair.c:243 READ sqid:1 cid:15 nsid:1, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for every READ completed between 10:07:36.059 and 10:07:36.679; only the console timestamps, lba, and sqhd fields vary ...]
00:18:04.759 [2024-11-04 10:07:36.683658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0)
00:18:04.759 [2024-11-04 10:07:36.683715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:04.759 [2024-11-04 10:07:36.683730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001
p:0 m:0 dnr:0 00:18:04.759 [2024-11-04 10:07:36.688481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.759 [2024-11-04 10:07:36.688700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.688718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.693470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.693557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.693586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.698348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.698389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.698401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.703017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.703073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.703087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.707665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.707732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.707747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.712129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.712335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.712354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.716964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.717020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.717050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.721457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.721496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.721526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.726193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.726248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.726262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.730861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.730900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.730929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.735438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.735492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.735521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.740056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.740095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.740108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.744881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.744920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.744949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.749530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.749571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.749617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.760 6758.00 IOPS, 844.75 MiB/s [2024-11-04T10:07:36.930Z] [2024-11-04 10:07:36.755666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.755735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.755750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.760263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.760302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.760332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.765090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.765142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.765172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.769695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.769766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.769781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.774211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.774397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.774415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.779172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.779226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.779255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.783927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.783966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.783979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.788522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.788561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.788574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.793179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.793395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.793414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.798150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.798189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.798219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.802644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.802696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.802725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.807277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.807332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.807361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.760 [2024-11-04 10:07:36.811756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.760 [2024-11-04 10:07:36.811799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.760 [2024-11-04 10:07:36.811853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.816434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.816471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.816500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.821042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.821080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.821093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.825474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.825526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.825554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.829994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.830061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.830091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.834771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.834807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.834852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.839482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.839536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.839564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.844133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.844173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.844186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.849025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.849065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.849093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.853742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.853794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.853823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.858391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.858446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.858476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.863056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.863094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.863107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.867815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.867881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.867895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.872504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.872542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.872571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.877052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.877241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.877259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.881943] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.881983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.881996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.886580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.886636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.886666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.891230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.891269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.891298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.895749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.895797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.895833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.900533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.900573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.900635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.905255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.905293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.905321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.909756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.909793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.909821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.914226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.914265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.914293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.918719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.918757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.918787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.923221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.923260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.923288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.761 [2024-11-04 10:07:36.927699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:04.761 [2024-11-04 10:07:36.927734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.761 [2024-11-04 10:07:36.927763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.932112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.932151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.932165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.936732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.936770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.936798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.941319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.941358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.941387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.946108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.946282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.946300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.950782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.950820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.950849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.955227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.955266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.955295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.959890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.959929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.959942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.964366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.964420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.964449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.968864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.968916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.968945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.973510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.973548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.973577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.977946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.978001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.978015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.982590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.982673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.982687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.987099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.987167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.987196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.991882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.992042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.992060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:36.996536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:36.996618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:36.996633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:37.001041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:37.001080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:37.001109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:37.005688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:37.005739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:05.019 [2024-11-04 10:07:37.005754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:37.009976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:37.010015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.019 [2024-11-04 10:07:37.010028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.019 [2024-11-04 10:07:37.014374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.019 [2024-11-04 10:07:37.014415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.014428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.018712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.018752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.018767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.023257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.023418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.023436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.027766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.027806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.027828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.032200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.032241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.032255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.036555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.036610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.036626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.040799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.040838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.040852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.045158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.045214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.045243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.049637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.049677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.049690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.054061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.054232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.054249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.058675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.058715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.058729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.063186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.063226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.063239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.067476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.067517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.067531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.071875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.071914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.071928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.076426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.076465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.076479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.080860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.080900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.080914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.085189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.085230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.085244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.089490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.089530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.089543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.093743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.093783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.093796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.097975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 
00:18:05.020 [2024-11-04 10:07:37.098014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.098027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.102376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.102417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.102430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.106758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.106797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.106810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.111086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.111126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.111140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.115486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.115526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.115539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.119832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.119871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.119884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.124086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.124133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.124147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.128454] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.128498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.128512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.132814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.132854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.132867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.137107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.137148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.137162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.141456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.141496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.141510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.145778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.145817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.145830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.150087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.020 [2024-11-04 10:07:37.150127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.020 [2024-11-04 10:07:37.150140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.020 [2024-11-04 10:07:37.154463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.021 [2024-11-04 10:07:37.154512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.021 [2024-11-04 10:07:37.154525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:05.021 [2024-11-04 10:07:37.158839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.021 [2024-11-04 10:07:37.158879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.021 [2024-11-04 10:07:37.158892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.021 [2024-11-04 10:07:37.163124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.021 [2024-11-04 10:07:37.163173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.021 [2024-11-04 10:07:37.163187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.021 [2024-11-04 10:07:37.167495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.021 [2024-11-04 10:07:37.167534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.021 [2024-11-04 10:07:37.167548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.021 [2024-11-04 10:07:37.171713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.021 [2024-11-04 10:07:37.171751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.021 [2024-11-04 10:07:37.171765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.021 [2024-11-04 10:07:37.176014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.021 [2024-11-04 10:07:37.176052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.021 [2024-11-04 10:07:37.176065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.021 [2024-11-04 10:07:37.180439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.021 [2024-11-04 10:07:37.180479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.021 [2024-11-04 10:07:37.180492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.021 [2024-11-04 10:07:37.184706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.021 [2024-11-04 10:07:37.184745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.021 [2024-11-04 10:07:37.184759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.280 [2024-11-04 10:07:37.188998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.280 [2024-11-04 10:07:37.189037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.280 [2024-11-04 10:07:37.189050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.280 [2024-11-04 10:07:37.193400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.280 [2024-11-04 10:07:37.193440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.280 [2024-11-04 10:07:37.193453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.280 [2024-11-04 10:07:37.197770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.280 [2024-11-04 10:07:37.197809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.280 [2024-11-04 10:07:37.197822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.280 [2024-11-04 10:07:37.202122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.280 [2024-11-04 10:07:37.202161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.280 [2024-11-04 10:07:37.202174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.280 [2024-11-04 10:07:37.206779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.280 [2024-11-04 10:07:37.206817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.280 [2024-11-04 10:07:37.206847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.280 [2024-11-04 10:07:37.211111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.280 [2024-11-04 10:07:37.211153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.280 [2024-11-04 10:07:37.211166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.280 [2024-11-04 10:07:37.215426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.215466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.215481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.219789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.219835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.219850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.224414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.224454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.224468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.229194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.229250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.229279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.233726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.233780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.233808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.238248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.238304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.238332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.242860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.242899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.242912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.247469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.247508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.247522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.252105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.252145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.252158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.256703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.256757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.256770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.261283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.261337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.261367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.265817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.265858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.265871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.270502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.270542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.270556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.275209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.275418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.275435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.280144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.280187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.280201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.284629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.284686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.284700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.289209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.289249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.289263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.293777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.293816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.293830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.298503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.298545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.298558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.303227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.303268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.303282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.307889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.307927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.307940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.312642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.312695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.312725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.317210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.317249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.317295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.321826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.321864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.321894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.326498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.326552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.326582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.331005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.331045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.331059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.335652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.281 [2024-11-04 10:07:37.335707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.281 [2024-11-04 10:07:37.335722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.281 [2024-11-04 10:07:37.340257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.340297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.340311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.344892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 
00:18:05.282 [2024-11-04 10:07:37.344932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.344945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.349543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.349584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.349614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.354226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.354266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.354280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.358825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.358863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.358877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.363334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.363372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.363401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.367894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.367933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.367946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.372479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.372518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.372549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.377076] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.377132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.377146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.381875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.381915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.381928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.386462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.386501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.386531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.391055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.391094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.391122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.395712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.395751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.395765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.400342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.400397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.400427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.405044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.405207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.405224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.409743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.409782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.409796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.414348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.414398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.414427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.418955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.418994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.419007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.423619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.423672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.423687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.428185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.428349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.428367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.432981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.433023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.433036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.437591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.437646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.437661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.442268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.442310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.442323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.282 [2024-11-04 10:07:37.446886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.282 [2024-11-04 10:07:37.447045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.282 [2024-11-04 10:07:37.447063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.451725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.451781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.451795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.456412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.456469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.456497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.461106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.461146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.461175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.465750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.465806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.465820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.470340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.470397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.470411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.475023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.475062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.475092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.479724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.479763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.479776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.484166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.484205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.484219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.488775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.488814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.488828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.493417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.493455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.493484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.498124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.498178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.498207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.502678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.502732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:05.543 [2024-11-04 10:07:37.502761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.507298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.507337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.507351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.511977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.512139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.512156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.516672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.516712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.543 [2024-11-04 10:07:37.516726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.543 [2024-11-04 10:07:37.521305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.543 [2024-11-04 10:07:37.521347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.521376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.525971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.526011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.526025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.530416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.530456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.530470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.534712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.534752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.534765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.538980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.539018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.539032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.543385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.543427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.543441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.548231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.548272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.548285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.552905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.552945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.552959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.557646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.557701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.557717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.562167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.562330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.562348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.566806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.566847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.566861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.571320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.571360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.571373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.575872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.575910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.575925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.580261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.580301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.580314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.584848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.584896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.584909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.589409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.589449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.589463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.593894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.593933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.593947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.598528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 
00:18:05.544 [2024-11-04 10:07:37.598567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.598580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.603133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.603172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.603186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.607831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.607871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.607884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.612435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.612476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.612490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.616925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.616965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.616980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.621617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.621667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.621682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.626300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.626341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.626355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.630779] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.630818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.630831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.635314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.635354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.635368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.639860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.639899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.639912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.544 [2024-11-04 10:07:37.644367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.544 [2024-11-04 10:07:37.644407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.544 [2024-11-04 10:07:37.644436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.649172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.649212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.649225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.653482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.653523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.653536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.657839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.657878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.657892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.662248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.662288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.662302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.666664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.666703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.666717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.671266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.671307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.671320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.675675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.675714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.675729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.680135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.680175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.680188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.684446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.684502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.684516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.688837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.688878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.688891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.693253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.693295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.693308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.697615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.697654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.697667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.701886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.701926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.701939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.706200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.706239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.706253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.545 [2024-11-04 10:07:37.710555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.545 [2024-11-04 10:07:37.710611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.545 [2024-11-04 10:07:37.710626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.714823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.714862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.714876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.719158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.719198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.719212] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.723509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.723548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.723562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.727854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.727894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.727908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.732237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.732278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.732291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.736630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.736669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.736683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.740937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.740976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.740989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.745371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.745411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.745424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.805 [2024-11-04 10:07:37.749766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14815a0) 00:18:05.805 [2024-11-04 10:07:37.749804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:05.805 [2024-11-04 10:07:37.749818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.805 6789.00 IOPS, 848.62 MiB/s 00:18:05.805 Latency(us) 00:18:05.805 [2024-11-04T10:07:37.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.805 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:05.805 nvme0n1 : 2.00 6787.05 848.38 0.00 0.00 2353.80 1854.37 6285.50 00:18:05.805 [2024-11-04T10:07:37.975Z] =================================================================================================================== 00:18:05.805 [2024-11-04T10:07:37.975Z] Total : 6787.05 848.38 0.00 0.00 2353.80 1854.37 6285.50 00:18:05.805 { 00:18:05.805 "results": [ 00:18:05.805 { 00:18:05.805 "job": "nvme0n1", 00:18:05.805 "core_mask": "0x2", 00:18:05.805 "workload": "randread", 00:18:05.805 "status": "finished", 00:18:05.805 "queue_depth": 16, 00:18:05.805 "io_size": 131072, 00:18:05.805 "runtime": 2.002931, 00:18:05.805 "iops": 6787.053572988785, 00:18:05.805 "mibps": 848.3816966235981, 00:18:05.805 "io_failed": 0, 00:18:05.805 "io_timeout": 0, 00:18:05.805 "avg_latency_us": 2353.798637366753, 00:18:05.805 "min_latency_us": 1854.370909090909, 00:18:05.805 "max_latency_us": 6285.498181818181 00:18:05.805 } 00:18:05.805 ], 00:18:05.805 "core_count": 1 00:18:05.805 } 00:18:05.805 10:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:05.805 10:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:05.805 10:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:05.805 | .driver_specific 00:18:05.805 | .nvme_error 00:18:05.805 | .status_code 00:18:05.805 | .command_transient_transport_error' 00:18:05.805 10:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 438 > 0 )) 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80094 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80094 ']' 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80094 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80094 00:18:06.064 killing process with pid 80094 00:18:06.064 Received shutdown signal, test time was about 2.000000 seconds 00:18:06.064 00:18:06.064 Latency(us) 00:18:06.064 [2024-11-04T10:07:38.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.064 [2024-11-04T10:07:38.234Z] =================================================================================================================== 00:18:06.064 [2024-11-04T10:07:38.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.064 10:07:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80094' 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80094 00:18:06.064 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80094 00:18:06.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80154 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80154 /var/tmp/bperf.sock 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80154 ']' 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:06.324 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:06.324 [2024-11-04 10:07:38.415590] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
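The transient-error check traced above (get_transient_errcount on nvme0n1 followed by killprocess) reduces to roughly the shell sketch below. It is reconstructed from the trace rather than copied from host/digest.sh, and it assumes the counter appears under driver_specific.nvme_error in bdev_get_iostat output because the controller was configured with --nvme-error-stat.

    # Sketch, reconstructed from the trace above (not the verbatim digest.sh helpers)
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))                      # the randread pass above counted 438 transient transport errors
    kill "$bperfpid" && wait "$bperfpid"    # killprocess additionally sanity-checks the pid before killing it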
00:18:06.324 [2024-11-04 10:07:38.415854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80154 ] 00:18:06.582 [2024-11-04 10:07:38.559931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.582 [2024-11-04 10:07:38.622080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.582 [2024-11-04 10:07:38.681803] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:06.582 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:06.582 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:18:06.582 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:06.582 10:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:07.149 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:07.149 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.149 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.149 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.149 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.149 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.408 nvme0n1 00:18:07.408 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:07.408 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.408 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.408 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.408 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:07.408 10:07:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:07.408 Running I/O for 2 seconds... 
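For reference, the randwrite digest-error pass being configured in the trace above amounts to roughly the sequence below. Paths, the RPC socket, the target address and every flag are copied from the trace; the wrapper functions (bperf_rpc, rpc_cmd, bperf_py) belong to the test scripts and are only paraphrased here, and which RPC socket rpc_cmd targets is not visible in this excerpt.

    # Sketch of the randwrite error pass (reconstructed from the trace; assumptions noted inline)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &   # -z: wait to be driven over RPC
    bperfpid=$!

    # count NVMe error completions per status code; retry forever so digest errors stay transient
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # clear any leftover crc32c injection, attach the NVMe-oF/TCP namespace with data digest
    # enabled (--ddgst), then arm crc32c corruption in the accel layer (the trace issues the
    # injection RPCs through rpc_cmd, whose socket is not shown here)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

    # run the 2-second workload; each injected digest error surfaces in the log below as a
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion that gets retried
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The error counter is then read back the same way as after the randread pass above (bdev_get_iostat piped through jq).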
00:18:07.679 [2024-11-04 10:07:39.580262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fef90 00:18:07.679 [2024-11-04 10:07:39.583025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.679 [2024-11-04 10:07:39.583069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.679 [2024-11-04 10:07:39.596933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166feb58 00:18:07.679 [2024-11-04 10:07:39.599645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.679 [2024-11-04 10:07:39.599688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.614161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fe2e8 00:18:07.680 [2024-11-04 10:07:39.616774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.616952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.631177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fda78 00:18:07.680 [2024-11-04 10:07:39.633861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.633904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.648582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fd208 00:18:07.680 [2024-11-04 10:07:39.651332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.651369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.666146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fc998 00:18:07.680 [2024-11-04 10:07:39.668766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.668927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.683731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fc128 00:18:07.680 [2024-11-04 10:07:39.686364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.686401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.701136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fb8b8 00:18:07.680 [2024-11-04 10:07:39.703708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.703748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.718060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fb048 00:18:07.680 [2024-11-04 10:07:39.720630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.720698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.735436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fa7d8 00:18:07.680 [2024-11-04 10:07:39.738059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.738097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.751901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f9f68 00:18:07.680 [2024-11-04 10:07:39.754321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.754356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.768337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f96f8 00:18:07.680 [2024-11-04 10:07:39.770683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.770714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.784814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f8e88 00:18:07.680 [2024-11-04 10:07:39.787146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.787185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.803351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f8618 00:18:07.680 [2024-11-04 10:07:39.806014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.806102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.821133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f7da8 00:18:07.680 [2024-11-04 10:07:39.823806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.823975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:07.680 [2024-11-04 10:07:39.838984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f7538 00:18:07.680 [2024-11-04 10:07:39.841524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.680 [2024-11-04 10:07:39.841571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.940 [2024-11-04 10:07:39.856601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f6cc8 00:18:07.940 [2024-11-04 10:07:39.859073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.940 [2024-11-04 10:07:39.859108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.940 [2024-11-04 10:07:39.873774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f6458 00:18:07.940 [2024-11-04 10:07:39.876239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.940 [2024-11-04 10:07:39.876275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:07.940 [2024-11-04 10:07:39.891440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f5be8 00:18:07.941 [2024-11-04 10:07:39.893768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:39.893806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:39.908552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f5378 00:18:07.941 [2024-11-04 10:07:39.910800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:39.910835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:39.925837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f4b08 00:18:07.941 [2024-11-04 10:07:39.928089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:39.928248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:39.942995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f4298 00:18:07.941 [2024-11-04 10:07:39.945291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:39.945344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:39.960688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f3a28 00:18:07.941 [2024-11-04 10:07:39.962941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:39.962978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:39.977990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f31b8 00:18:07.941 [2024-11-04 10:07:39.980211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:39.980250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:39.995454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f2948 00:18:07.941 [2024-11-04 10:07:39.997608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:39.997776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:40.012904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f20d8 00:18:07.941 [2024-11-04 10:07:40.015085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:40.015122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:40.029715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f1868 00:18:07.941 [2024-11-04 10:07:40.031811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:40.031856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:40.046372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f0ff8 00:18:07.941 [2024-11-04 10:07:40.048493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:40.048529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:40.063381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f0788 00:18:07.941 [2024-11-04 10:07:40.065589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:40.065750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:40.080813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166eff18 00:18:07.941 [2024-11-04 10:07:40.083088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:40.083125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:07.941 [2024-11-04 10:07:40.097872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ef6a8 00:18:07.941 [2024-11-04 10:07:40.099832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.941 [2024-11-04 10:07:40.099988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:08.200 [2024-11-04 10:07:40.114475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166eee38 00:18:08.200 [2024-11-04 10:07:40.116441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-04 10:07:40.116480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:08.200 [2024-11-04 10:07:40.131304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ee5c8 00:18:08.200 [2024-11-04 10:07:40.133289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-04 10:07:40.133442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.200 [2024-11-04 10:07:40.148363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166edd58 00:18:08.200 [2024-11-04 10:07:40.150319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-04 10:07:40.150371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:08.200 [2024-11-04 10:07:40.165527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ed4e8 00:18:08.200 [2024-11-04 10:07:40.167432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-04 10:07:40.167580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:08.200 [2024-11-04 10:07:40.182656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ecc78 00:18:08.200 [2024-11-04 10:07:40.184530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-04 10:07:40.184569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:08.200 [2024-11-04 10:07:40.199758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ec408 00:18:08.200 [2024-11-04 10:07:40.201636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-04 10:07:40.201677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:08.200 [2024-11-04 10:07:40.216700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ebb98 00:18:08.200 [2024-11-04 10:07:40.218727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-04 10:07:40.218763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:08.200 [2024-11-04 10:07:40.233833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166eb328 00:18:08.200 [2024-11-04 10:07:40.235639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.200 [2024-11-04 10:07:40.235675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:08.201 [2024-11-04 10:07:40.250930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166eaab8 00:18:08.201 [2024-11-04 10:07:40.252897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-04 10:07:40.252934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:08.201 [2024-11-04 10:07:40.268110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ea248 00:18:08.201 [2024-11-04 10:07:40.270030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-04 10:07:40.270064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:08.201 [2024-11-04 10:07:40.285242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e99d8 00:18:08.201 [2024-11-04 10:07:40.287259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-04 
10:07:40.287295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.201 [2024-11-04 10:07:40.301809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e9168 00:18:08.201 [2024-11-04 10:07:40.303634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-04 10:07:40.303699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.201 [2024-11-04 10:07:40.318300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e88f8 00:18:08.201 [2024-11-04 10:07:40.320063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-04 10:07:40.320227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:08.201 [2024-11-04 10:07:40.335150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e8088 00:18:08.201 [2024-11-04 10:07:40.337004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-04 10:07:40.337042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:08.201 [2024-11-04 10:07:40.352035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e7818 00:18:08.201 [2024-11-04 10:07:40.353772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-04 10:07:40.353806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.201 [2024-11-04 10:07:40.368100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e6fa8 00:18:08.201 [2024-11-04 10:07:40.369798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.201 [2024-11-04 10:07:40.369830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.383536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e6738 00:18:08.460 [2024-11-04 10:07:40.385238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.385273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.400016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e5ec8 00:18:08.460 [2024-11-04 10:07:40.401637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:08.460 [2024-11-04 10:07:40.401711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.415751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e5658 00:18:08.460 [2024-11-04 10:07:40.417441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.417476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.430905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e4de8 00:18:08.460 [2024-11-04 10:07:40.432545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.432581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.447244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e4578 00:18:08.460 [2024-11-04 10:07:40.448873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.448909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.462574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e3d08 00:18:08.460 [2024-11-04 10:07:40.464197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.464232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.477667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e3498 00:18:08.460 [2024-11-04 10:07:40.479226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.479259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.494081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e2c28 00:18:08.460 [2024-11-04 10:07:40.495569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.495641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.509493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e23b8 00:18:08.460 [2024-11-04 10:07:40.511110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22463 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.511138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.524522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e1b48 00:18:08.460 [2024-11-04 10:07:40.526039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.526071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.540498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e12d8 00:18:08.460 [2024-11-04 10:07:40.542072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.542104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:08.460 14929.00 IOPS, 58.32 MiB/s [2024-11-04T10:07:40.630Z] [2024-11-04 10:07:40.558517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e0a68 00:18:08.460 [2024-11-04 10:07:40.560060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.560102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.575243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e01f8 00:18:08.460 [2024-11-04 10:07:40.576869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.576902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.592035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166df988 00:18:08.460 [2024-11-04 10:07:40.593462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.593631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.608795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166df118 00:18:08.460 [2024-11-04 10:07:40.610294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.610325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:08.460 [2024-11-04 10:07:40.625821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166de8a8 00:18:08.460 [2024-11-04 10:07:40.627178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.460 [2024-11-04 10:07:40.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:08.720 [2024-11-04 10:07:40.642229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166de038 00:18:08.720 [2024-11-04 10:07:40.643544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.720 [2024-11-04 10:07:40.643580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:08.720 [2024-11-04 10:07:40.665705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166de038 00:18:08.720 [2024-11-04 10:07:40.668313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.720 [2024-11-04 10:07:40.668351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.720 [2024-11-04 10:07:40.682385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166de8a8 00:18:08.720 [2024-11-04 10:07:40.685046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.720 [2024-11-04 10:07:40.685083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:08.720 [2024-11-04 10:07:40.698843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166df118 00:18:08.720 [2024-11-04 10:07:40.701378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.720 [2024-11-04 10:07:40.701569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:08.720 [2024-11-04 10:07:40.715701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166df988 00:18:08.720 [2024-11-04 10:07:40.718218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.720 [2024-11-04 10:07:40.718256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:08.720 [2024-11-04 10:07:40.731946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e01f8 00:18:08.720 [2024-11-04 10:07:40.734430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.734474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.748294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e0a68 00:18:08.721 [2024-11-04 
10:07:40.750804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.750840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.765005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e12d8 00:18:08.721 [2024-11-04 10:07:40.767556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.767616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.781983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e1b48 00:18:08.721 [2024-11-04 10:07:40.784541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.784575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.798793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e23b8 00:18:08.721 [2024-11-04 10:07:40.801219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.801254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.815180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e2c28 00:18:08.721 [2024-11-04 10:07:40.817620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.817679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.831968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e3498 00:18:08.721 [2024-11-04 10:07:40.834382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.834414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.848612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e3d08 00:18:08.721 [2024-11-04 10:07:40.851028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.851208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.864764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e4578 
00:18:08.721 [2024-11-04 10:07:40.867248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.867281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:08.721 [2024-11-04 10:07:40.881371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e4de8 00:18:08.721 [2024-11-04 10:07:40.883736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.721 [2024-11-04 10:07:40.883898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:40.897765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e5658 00:18:08.981 [2024-11-04 10:07:40.899997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:40.900051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:40.913428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e5ec8 00:18:08.981 [2024-11-04 10:07:40.915696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:18046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:40.915730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:40.929519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e6738 00:18:08.981 [2024-11-04 10:07:40.931868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:40.931907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:40.945885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e6fa8 00:18:08.981 [2024-11-04 10:07:40.948229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:40.948443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:40.961535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e7818 00:18:08.981 [2024-11-04 10:07:40.963699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:40.963733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:40.976985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with 
pdu=0x2000166e8088 00:18:08.981 [2024-11-04 10:07:40.979249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:40.979284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:40.993418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e88f8 00:18:08.981 [2024-11-04 10:07:40.995587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:40.995646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.009024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e9168 00:18:08.981 [2024-11-04 10:07:41.011168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.011202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.024502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166e99d8 00:18:08.981 [2024-11-04 10:07:41.026779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.026819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.040981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ea248 00:18:08.981 [2024-11-04 10:07:41.043181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.043230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.056620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166eaab8 00:18:08.981 [2024-11-04 10:07:41.058709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.058743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.071697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166eb328 00:18:08.981 [2024-11-04 10:07:41.073798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.073832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.087942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x127e750) with pdu=0x2000166ebb98 00:18:08.981 [2024-11-04 10:07:41.090058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.090106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.104127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ec408 00:18:08.981 [2024-11-04 10:07:41.106225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.106256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.119265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ecc78 00:18:08.981 [2024-11-04 10:07:41.121341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.121374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:08.981 [2024-11-04 10:07:41.134848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ed4e8 00:18:08.981 [2024-11-04 10:07:41.136887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.981 [2024-11-04 10:07:41.136924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.151299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166edd58 00:18:09.241 [2024-11-04 10:07:41.153492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.153522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.167364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166ee5c8 00:18:09.241 [2024-11-04 10:07:41.169375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.169412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.183413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166eee38 00:18:09.241 [2024-11-04 10:07:41.185459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.185496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.199886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x127e750) with pdu=0x2000166ef6a8 00:18:09.241 [2024-11-04 10:07:41.202100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.202133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.215511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166eff18 00:18:09.241 [2024-11-04 10:07:41.217453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.217483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.230838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f0788 00:18:09.241 [2024-11-04 10:07:41.233041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.233080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.247362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f0ff8 00:18:09.241 [2024-11-04 10:07:41.249368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.249405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.263508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f1868 00:18:09.241 [2024-11-04 10:07:41.265340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.265375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.279068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f20d8 00:18:09.241 [2024-11-04 10:07:41.280849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.280883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.294090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f2948 00:18:09.241 [2024-11-04 10:07:41.295931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.295966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.310606] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f31b8 00:18:09.241 [2024-11-04 10:07:41.312498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.312533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.326761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f3a28 00:18:09.241 [2024-11-04 10:07:41.328594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.328641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.343395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f4298 00:18:09.241 [2024-11-04 10:07:41.345381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.345415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.361168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f4b08 00:18:09.241 [2024-11-04 10:07:41.363276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.363309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.378706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f5378 00:18:09.241 [2024-11-04 10:07:41.380545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.380582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:09.241 [2024-11-04 10:07:41.395351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f5be8 00:18:09.241 [2024-11-04 10:07:41.397074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.241 [2024-11-04 10:07:41.397108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:09.500 [2024-11-04 10:07:41.411302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f6458 00:18:09.500 [2024-11-04 10:07:41.413014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.500 [2024-11-04 10:07:41.413092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:09.500 [2024-11-04 
10:07:41.427575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f6cc8 00:18:09.500 [2024-11-04 10:07:41.429296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.500 [2024-11-04 10:07:41.429330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.500 [2024-11-04 10:07:41.443615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f7538 00:18:09.500 [2024-11-04 10:07:41.445383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.500 [2024-11-04 10:07:41.445420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:09.500 [2024-11-04 10:07:41.460140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f7da8 00:18:09.500 [2024-11-04 10:07:41.461792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.500 [2024-11-04 10:07:41.461825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:09.500 [2024-11-04 10:07:41.476629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f8618 00:18:09.500 [2024-11-04 10:07:41.478439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.500 [2024-11-04 10:07:41.478473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:09.500 [2024-11-04 10:07:41.493221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f8e88 00:18:09.500 [2024-11-04 10:07:41.494798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.500 [2024-11-04 10:07:41.494847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:09.500 [2024-11-04 10:07:41.509540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f96f8 00:18:09.500 [2024-11-04 10:07:41.511117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.500 [2024-11-04 10:07:41.511152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:09.500 [2024-11-04 10:07:41.526024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166f9f68 00:18:09.500 [2024-11-04 10:07:41.527541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.500 [2024-11-04 10:07:41.527575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:18:09.500 [2024-11-04 10:07:41.542473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127e750) with pdu=0x2000166fa7d8 00:18:09.501 [2024-11-04 10:07:41.544104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.501 [2024-11-04 10:07:41.544169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:09.501 15244.50 IOPS, 59.55 MiB/s 00:18:09.501 Latency(us) 00:18:09.501 [2024-11-04T10:07:41.671Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.501 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.501 nvme0n1 : 2.00 15212.74 59.42 0.00 0.00 8400.53 7179.17 31933.91 00:18:09.501 [2024-11-04T10:07:41.671Z] =================================================================================================================== 00:18:09.501 [2024-11-04T10:07:41.671Z] Total : 15212.74 59.42 0.00 0.00 8400.53 7179.17 31933.91 00:18:09.501 { 00:18:09.501 "results": [ 00:18:09.501 { 00:18:09.501 "job": "nvme0n1", 00:18:09.501 "core_mask": "0x2", 00:18:09.501 "workload": "randwrite", 00:18:09.501 "status": "finished", 00:18:09.501 "queue_depth": 128, 00:18:09.501 "io_size": 4096, 00:18:09.501 "runtime": 2.004241, 00:18:09.501 "iops": 15212.741381899681, 00:18:09.501 "mibps": 59.42477102304563, 00:18:09.501 "io_failed": 0, 00:18:09.501 "io_timeout": 0, 00:18:09.501 "avg_latency_us": 8400.531452458332, 00:18:09.501 "min_latency_us": 7179.170909090909, 00:18:09.501 "max_latency_us": 31933.905454545453 00:18:09.501 } 00:18:09.501 ], 00:18:09.501 "core_count": 1 00:18:09.501 } 00:18:09.501 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:09.501 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:09.501 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:09.501 | .driver_specific 00:18:09.501 | .nvme_error 00:18:09.501 | .status_code 00:18:09.501 | .command_transient_transport_error' 00:18:09.501 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:09.760 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 119 > 0 )) 00:18:09.760 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80154 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80154 ']' 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80154 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80154 00:18:09.761 killing process with pid 80154 00:18:09.761 Received shutdown signal, test time was about 2.000000 seconds 00:18:09.761 00:18:09.761 Latency(us) 00:18:09.761 [2024-11-04T10:07:41.931Z] Device Information : 
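get_transient_errcount is where the digest_error test turns the stream of COMMAND TRANSIENT TRANSPORT ERROR completions into a pass/fail condition: it asks the bdevperf instance for per-bdev I/O statistics over the bperf RPC socket and extracts the NVMe error counter for that status code, and the script then requires the count to be greater than zero ((( 119 > 0 )) in this run). A minimal hand-run sketch of the same query, using the paths shown in the trace (not part of the captured output):

  # Query bdevperf's RPC socket for nvme0n1 statistics and pull out the count of
  # completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

These per-status counters are only populated because bdev_nvme_set_options was called with --nvme-error-stat when the controller was configured.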
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.761 [2024-11-04T10:07:41.931Z] =================================================================================================================== 00:18:09.761 [2024-11-04T10:07:41.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80154' 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80154 00:18:09.761 10:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80154 00:18:10.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80208 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80208 /var/tmp/bperf.sock 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80208 ']' 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:10.020 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.020 [2024-11-04 10:07:42.151550] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:18:10.020 [2024-11-04 10:07:42.151870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80208 ] 00:18:10.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:10.020 Zero copy mechanism will not be used. 
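For this second pass the test launches a fresh bdevperf with a 128 KiB random-write workload at queue depth 16; the -z flag makes it idle until an RPC client tells it to run, and waitforlisten polls the UNIX socket before any configuration is sent (hence the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock..." message). A rough hand-run equivalent of that launch, assuming the same workspace paths; the polling loop below only illustrates what waitforlisten achieves and is not its actual implementation:

  # Start bdevperf on core 1 (mask 0x2), 128 KiB randwrite, qd 16, 2 s run, idle until an RPC kick.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  # Wait until the RPC socket answers before configuring the new instance.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done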
00:18:10.279 [2024-11-04 10:07:42.292902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:10.279 [2024-11-04 10:07:42.354502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:10.279 [2024-11-04 10:07:42.411890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:10.538 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:18:10.538 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:18:10.538 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:10.538 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:10.797 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:10.797 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:10.797 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:10.797 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:10.797 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:10.797 10:07:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:11.056 nvme0n1
00:18:11.056 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:18:11.056 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:11.056 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:11.056 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:11.056 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:11.056 10:07:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:11.056 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:11.056 Zero copy mechanism will not be used.
00:18:11.056 Running I/O for 2 seconds...
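The trace just above arms the error run: bdev_nvme_set_options enables per-status NVMe error counters (--nvme-error-stat) and sets the bdev-layer retry count to -1, the controller is attached with --ddgst so data digests are carried on the TCP data PDUs, and accel_error_inject_error tells the accel error module to corrupt crc32c operations (-t corrupt -i 32), which is what produces the data_crc32_calc_done digest errors and the TRANSIENT TRANSPORT ERROR completions in the burst that follows. A condensed sketch of the same RPC sequence in the traced order; the socket used for the accel injection is an assumption (rpc_cmd in the trace goes to the test's default RPC socket, not /var/tmp/bperf.sock):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdevperf side: keep per-status NVMe error counters, bdev retry count -1.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side (assumed default RPC socket): clear any previous injection.
  $RPC accel_error_inject_error -o crc32c -t disable
  # attach the NVMe-oF controller with data digest enabled on the connection.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side (assumed default RPC socket): start corrupting crc32c results so digest checks fail.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # kick off the timed run in the already-waiting bdevperf instance.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests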
00:18:11.056 [2024-11-04 10:07:43.217093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.056 [2024-11-04 10:07:43.217430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.056 [2024-11-04 10:07:43.217461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.056 [2024-11-04 10:07:43.222282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.056 [2024-11-04 10:07:43.222785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.056 [2024-11-04 10:07:43.222820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.227649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.227959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.227990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.232970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.233302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.233341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.238312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.238798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.238830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.243923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.244226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.244265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.249197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.249538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.249577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.254275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.254765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.254799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.259747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.260071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.260109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.264978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.265304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.265343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.270273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.270797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.270830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.275698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.276202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.276406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.281360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.281882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.282060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.286939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.287446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.287644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.292630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.293132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.293306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.298208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.298720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.298956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.304042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.304504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.304699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.309599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.310117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.310278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.316 [2024-11-04 10:07:43.315450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.316 [2024-11-04 10:07:43.315944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.316 [2024-11-04 10:07:43.316121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.321237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.321549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.321600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.326303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.326756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.326789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.331574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.331920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.331967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.336718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.337039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.337076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.341879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.342269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.342307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.347123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.347451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.347533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.352418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.352767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.352806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.357567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.357918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.357955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.362736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.363080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 
[2024-11-04 10:07:43.363163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.367999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.368459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.368637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.373486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.373999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.374161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.379099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.379597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.379860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.384882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.385355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.385517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.390550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.391039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.391200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.396210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.396735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.396898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.402009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.402471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.402723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.407691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.408161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.408306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.413215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.413747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.413779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.418818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.419152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.419232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.424177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.424555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.424604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.429578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.430062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.430094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.435124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.435459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.435498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.440428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.440773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.440810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.445797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.446139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.446220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.451002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.451458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.451679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.456645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.457110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.457272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.317 [2024-11-04 10:07:43.462159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.317 [2024-11-04 10:07:43.462628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.317 [2024-11-04 10:07:43.462808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.318 [2024-11-04 10:07:43.467741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.318 [2024-11-04 10:07:43.468205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.318 [2024-11-04 10:07:43.468378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.318 [2024-11-04 10:07:43.473420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.318 [2024-11-04 10:07:43.473889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.318 [2024-11-04 10:07:43.474075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.318 [2024-11-04 10:07:43.479086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.318 [2024-11-04 10:07:43.479537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.318 [2024-11-04 10:07:43.479717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.318 [2024-11-04 10:07:43.484782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.485238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.485413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.490392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.490880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.491053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.496204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.496677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.496849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.501765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.502074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.502112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.507013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.507341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.507378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.512198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.512499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.512579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.517461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 
[2024-11-04 10:07:43.517939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.517972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.522906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.523237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.523275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.528107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.528439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.528519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.533465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.533952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.533983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.538920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.539223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.539260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.544260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.544579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.544671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.549533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.549992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.550024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.554968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.555269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.555307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.560096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.560435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.560515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.565499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.565960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.565992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.570798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.571100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.571189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.578 [2024-11-04 10:07:43.576005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.578 [2024-11-04 10:07:43.576463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.578 [2024-11-04 10:07:43.576639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.581630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.582111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.582272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.587349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.587877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.588160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.593273] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.593758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.593919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.599183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.599702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.599889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.605259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.605820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.606030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.611088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.611537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.611726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.616820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.617288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.617466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.622546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.622881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.622915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.627692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.628003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.628041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:11.579 [2024-11-04 10:07:43.632807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.633107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.633188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.638077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.638383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.638422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.643244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.643544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.643581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.648431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.648742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.648778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.653577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.653893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.653924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.658728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.659029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.659122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.664033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.664516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.664690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.669788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.670254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.670416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.675974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.676455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.676730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.681868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.682359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.682581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.688574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.689116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.689291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.694567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.695068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.695242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.700552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.701056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.701264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.706381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.706865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.706904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.711894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.712209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.712246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.716986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.717466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.717499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.722439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.722754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.722793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.579 [2024-11-04 10:07:43.727577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.579 [2024-11-04 10:07:43.727955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.579 [2024-11-04 10:07:43.727993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.580 [2024-11-04 10:07:43.732677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.580 [2024-11-04 10:07:43.733008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-04 10:07:43.733044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.580 [2024-11-04 10:07:43.737889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.580 [2024-11-04 10:07:43.738219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-04 10:07:43.738299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.580 [2024-11-04 10:07:43.743225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.580 [2024-11-04 10:07:43.743705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.580 [2024-11-04 10:07:43.743879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.749035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.749521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.749699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.754801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.755257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.755491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.760636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.761108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.761284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.766358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.766868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.767028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.772142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.772642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.772813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.777864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.778322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.778483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.783511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.784033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 
[2024-11-04 10:07:43.784073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.788920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.789224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.789261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.794822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.795156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.795236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.800027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.800493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.800666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.805736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.806194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.806416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.811453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.811965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.812134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.817157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.817630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.817811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.822821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.823280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.823466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.828464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.828945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.829134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.834096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.834564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.834752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.839803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.840272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.840305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.845154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.845460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.845498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.850299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.850615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.850653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.855454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.855909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.855941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.860828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.861149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.861188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.866174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.866480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.866564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.871472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.871969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.872143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.877433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.840 [2024-11-04 10:07:43.877936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.840 [2024-11-04 10:07:43.878111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.840 [2024-11-04 10:07:43.883262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.883760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.883947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.889087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.889543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.889739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.894911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.895388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.895565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.900753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.901218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.901392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.906579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.907066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.907246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.912419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.912895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.913101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.918287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.918753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.918946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.924211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.924661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.924695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.929617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.929924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.929964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.934904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.935213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.935253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.940203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 
[2024-11-04 10:07:43.940509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.940604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.945546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.945876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.945915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.950832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.951144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.951183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.956249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.956554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.961701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.962003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.962043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.967030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.967344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.967390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.972356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.972673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.972711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.977613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.977919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.977958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.982885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.983194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.983236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.988225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.988529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.988625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.993561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.993883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.993922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:43.998864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:43.999181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:43.999226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.841 [2024-11-04 10:07:44.004195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:11.841 [2024-11-04 10:07:44.004508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.841 [2024-11-04 10:07:44.004675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.101 [2024-11-04 10:07:44.009711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.010014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.101 [2024-11-04 10:07:44.010060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.101 [2024-11-04 10:07:44.014970] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.015285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.101 [2024-11-04 10:07:44.015325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.101 [2024-11-04 10:07:44.020322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.020652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.101 [2024-11-04 10:07:44.020690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.101 [2024-11-04 10:07:44.025602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.025906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.101 [2024-11-04 10:07:44.025944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.101 [2024-11-04 10:07:44.030802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.031107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.101 [2024-11-04 10:07:44.031146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.101 [2024-11-04 10:07:44.036092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.036398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.101 [2024-11-04 10:07:44.036481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.101 [2024-11-04 10:07:44.041445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.041768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.101 [2024-11-04 10:07:44.041812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.101 [2024-11-04 10:07:44.046746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.047050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.101 [2024-11-04 10:07:44.047088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:12.101 [2024-11-04 10:07:44.052024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.101 [2024-11-04 10:07:44.052328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.052410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.057391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.057716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.057758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.062741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.063050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.063139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.068145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.068450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.068490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.073435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.073760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.073926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.078925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.079227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.079266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.084220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.084535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.084576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.089508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.089825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.089857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.094763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.095075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.095114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.100089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.100411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.100451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.105357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.105675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.105713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.110653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.110962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.110999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.115914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.116237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.116279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.121230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.121545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.121742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.126742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.127061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.127101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.132066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.132369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.132409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.137427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.137744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.137781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.142773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.143082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.143121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.148114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.148433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.148472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.153711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.154048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.154087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.159251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.159598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.159636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.164576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.164911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.165083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.170042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.170484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.170518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.175445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.175768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.175928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.180941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.181246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.181286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.186131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.186438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.186478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.191349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.191673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.102 [2024-11-04 10:07:44.191712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.102 [2024-11-04 10:07:44.196512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.102 [2024-11-04 10:07:44.196980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 
[2024-11-04 10:07:44.197014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.201855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.202169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.202208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.207053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.207359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.207445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.103 5658.00 IOPS, 707.25 MiB/s [2024-11-04T10:07:44.273Z] [2024-11-04 10:07:44.214096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.214565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.214769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.219901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.220373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.220553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.225584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.226061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.226236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.231324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.231796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.231990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.237042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.237499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.237697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.242698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.243145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.243316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.248481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.248959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.249150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.254140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.254599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.254647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.259522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.259871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.260084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.265047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.103 [2024-11-04 10:07:44.265360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.103 [2024-11-04 10:07:44.265401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.103 [2024-11-04 10:07:44.270252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.270554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.270604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.275385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.275705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.275744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.280633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.280936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.280975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.285789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.286097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.286245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.291102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.291404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.291446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.296236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.296687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.296720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.301791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.302236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.302418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.307406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.307930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.308111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.313082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 
[2024-11-04 10:07:44.313530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.313727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.318732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.319196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.319370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.324460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.324942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.325114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.330185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.330664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.330845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.335934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.336387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.336560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.341675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.342138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.342322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.347364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.347842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.348020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.352985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.353293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.353334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.358195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.358499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.358539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.363325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.363769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.363802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.368651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.368968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.369016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.373780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.374089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.374129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.378907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.379214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.364 [2024-11-04 10:07:44.379400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.364 [2024-11-04 10:07:44.384242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.364 [2024-11-04 10:07:44.384552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.384602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.389386] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.389712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.389750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.394553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.395013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.395046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.399895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.400198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.400238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.405059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.405368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.405520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.410409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.410863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.415739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.416053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.416092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.420935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.421254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.421447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:12.365 [2024-11-04 10:07:44.426281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.426728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.426763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.431664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.431984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.432033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.436877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.437188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.437364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.442255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.442735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.442770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.447610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.447930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.448015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.452849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.453148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.453188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.457986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.458434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.458469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.463428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.463753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.463792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.468675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.468982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.469020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.473876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.474179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.474226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.479236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.479538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.479574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.484451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.484763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.484809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.489678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.489986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.490025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.494861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.495169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.495253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.500177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.500481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.500531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.505363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.505811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.505843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.510716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.511018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.511056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.515859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.516159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.516197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.521028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.365 [2024-11-04 10:07:44.521460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.365 [2024-11-04 10:07:44.521495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.365 [2024-11-04 10:07:44.526514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.366 [2024-11-04 10:07:44.526972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.366 [2024-11-04 10:07:44.527148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.366 [2024-11-04 10:07:44.532280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.366 [2024-11-04 10:07:44.532757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.532988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.537889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.538364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.538534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.543562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.544044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.544224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.549158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.549627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.549803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.554735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.555197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.555372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.560392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.560856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.561034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.566058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.566511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.566674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.571536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.571865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:12.627 [2024-11-04 10:07:44.571904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.576663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.576964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.577003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.581871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.582173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.582254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.587096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.587398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.587437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.592260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.592705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.592738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.597515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.597831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.597870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.602744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.603045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.603082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.607886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.608196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.608235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.613051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.613357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.613439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.618218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.618684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.618853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.623795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.624263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.624436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.629364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.629838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.630009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.627 [2024-11-04 10:07:44.634989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.627 [2024-11-04 10:07:44.635444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.627 [2024-11-04 10:07:44.635634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.640672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.641130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.641313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.646329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.646802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.646974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.652036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.652496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.652698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.657673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.658124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.658269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.663262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.663724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.663901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.668930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.669396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.669576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.674580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.675046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.675262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.680304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.680776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.680946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.685970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.686473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.686664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.691849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.692324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.692566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.697630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.698097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.698303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.703433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.703927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.704094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.709819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.710276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.710466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.715559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.715896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.715940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.721022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.721467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.721500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.726387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 
[2024-11-04 10:07:44.726714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.726751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.731600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.731929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.731959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.736852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.737157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.737194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.742054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.742363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.742445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.747284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.747600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.747637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.752458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.752921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.752952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.757786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.758089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.758126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.762974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.763282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.763319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.768152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.768609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.768640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.773633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.774090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.774271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.779215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.779686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.779874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.628 [2024-11-04 10:07:44.784924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.628 [2024-11-04 10:07:44.785398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.628 [2024-11-04 10:07:44.785566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.629 [2024-11-04 10:07:44.790638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.629 [2024-11-04 10:07:44.791106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.629 [2024-11-04 10:07:44.791276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.796312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.796780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.796965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.802015] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.802474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.802695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.807767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.808242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.808430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.813475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.813963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.814167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.819151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.819616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.819801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.824894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.825346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.825522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.830560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.831045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.831221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.836339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.836808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.836981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:12.889 [2024-11-04 10:07:44.842038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.842497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.842686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.847664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.848105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.848141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.853001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.853302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.853341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.858137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.858438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.858477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.889 [2024-11-04 10:07:44.863268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.889 [2024-11-04 10:07:44.863726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.889 [2024-11-04 10:07:44.863757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.868550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.868878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.868916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.873750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.874054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.874092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.878902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.879204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.879242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.884060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.884364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.884451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.889234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.889704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.889929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.894986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.895448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.895638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.900627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.901079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.901265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.906207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.906671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.906852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.911866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.912328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.912501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.917446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.917926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.918104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.923121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.923584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.923770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.930735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.931042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.931122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.935921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.936225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.936263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.941047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.941348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.941387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.946203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.946660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.946693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.951450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.951773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.951810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.956743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.957052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.957089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.961918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.962220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.962301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.967113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.967415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.967453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.972283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.972584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.972632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.980599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.981019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.981188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.989206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.989811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:44.989844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:44.997730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:44.998047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 
[2024-11-04 10:07:44.998084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:45.004369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:45.004697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:45.004735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:45.011027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:45.011350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:45.011395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:45.018684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:45.019019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:45.019062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.890 [2024-11-04 10:07:45.026408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.890 [2024-11-04 10:07:45.026741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.890 [2024-11-04 10:07:45.026776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.891 [2024-11-04 10:07:45.033246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.891 [2024-11-04 10:07:45.033803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.891 [2024-11-04 10:07:45.033846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.891 [2024-11-04 10:07:45.043335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.891 [2024-11-04 10:07:45.043720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.891 [2024-11-04 10:07:45.043762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.891 [2024-11-04 10:07:45.053130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:12.891 [2024-11-04 10:07:45.053702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:12.891 [2024-11-04 10:07:45.053750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.150 [2024-11-04 10:07:45.063012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.150 [2024-11-04 10:07:45.063367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.063405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.072824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.073186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.073226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.082719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.083080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.083121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.092289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.092652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.092694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.101867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.102223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.102264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.111506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.111903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.111952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.121069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.121431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.121472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.130702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.131054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.131097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.140329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.140705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.140747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.149959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.150324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.150369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.159349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.159703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.159746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.168562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.168954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.169001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.176186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.176509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.176546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.182682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.182995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.183034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.189167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.189753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.189786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.196008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.196343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.196430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.202499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.202827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.202863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.151 [2024-11-04 10:07:45.209219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127ea90) with pdu=0x2000166fef90 00:18:13.151 [2024-11-04 10:07:45.209770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.151 [2024-11-04 10:07:45.209811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.151 5425.00 IOPS, 678.12 MiB/s 00:18:13.151 Latency(us) 00:18:13.151 [2024-11-04T10:07:45.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.151 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:13.151 nvme0n1 : 2.01 5420.68 677.58 0.00 0.00 2944.71 2144.81 10366.60 00:18:13.151 [2024-11-04T10:07:45.321Z] =================================================================================================================== 00:18:13.151 [2024-11-04T10:07:45.321Z] Total : 5420.68 677.58 0.00 0.00 2944.71 2144.81 10366.60 00:18:13.151 { 00:18:13.151 "results": [ 00:18:13.151 { 00:18:13.151 "job": "nvme0n1", 00:18:13.151 "core_mask": "0x2", 00:18:13.151 "workload": "randwrite", 00:18:13.151 "status": "finished", 00:18:13.151 "queue_depth": 16, 00:18:13.151 "io_size": 131072, 00:18:13.151 "runtime": 2.005284, 00:18:13.151 "iops": 5420.678567225391, 00:18:13.151 "mibps": 677.5848209031739, 00:18:13.151 "io_failed": 0, 00:18:13.151 "io_timeout": 0, 00:18:13.151 "avg_latency_us": 2944.7050758551477, 00:18:13.151 "min_latency_us": 2144.8145454545456, 00:18:13.151 "max_latency_us": 10366.603636363636 
00:18:13.151 } 00:18:13.151 ], 00:18:13.151 "core_count": 1 00:18:13.151 } 00:18:13.151 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:13.151 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:13.151 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:13.151 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:13.151 | .driver_specific 00:18:13.151 | .nvme_error 00:18:13.151 | .status_code 00:18:13.151 | .command_transient_transport_error' 00:18:13.410 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 350 > 0 )) 00:18:13.410 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80208 00:18:13.410 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80208 ']' 00:18:13.410 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80208 00:18:13.410 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:13.410 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:13.410 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80208 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80208' 00:18:13.669 killing process with pid 80208 00:18:13.669 Received shutdown signal, test time was about 2.000000 seconds 00:18:13.669 00:18:13.669 Latency(us) 00:18:13.669 [2024-11-04T10:07:45.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.669 [2024-11-04T10:07:45.839Z] =================================================================================================================== 00:18:13.669 [2024-11-04T10:07:45.839Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80208 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80208 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79996 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79996 ']' 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79996 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79996 
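The get_transient_errcount check above reduces to a single bdev_get_iostat RPC call plus a jq filter over the returned NVMe error counters. A minimal sketch of that query, reusing only the socket path, bdev name, and jq filter captured in this log (the surrounding shell variables are illustrative):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Ask the bdevperf instance (listening on the bperf RPC socket) for per-bdev I/O statistics.
  errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest-error test passes only if the injected CRC32C data digest errors were
  # surfaced as transient transport error completions (the counter read back as 350 here).
  (( errcount > 0 ))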
00:18:13.669 killing process with pid 79996 00:18:13.669 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:13.670 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:13.670 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79996' 00:18:13.670 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79996 00:18:13.670 10:07:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79996 00:18:13.928 00:18:13.928 real 0m17.839s 00:18:13.928 user 0m34.748s 00:18:13.928 sys 0m4.743s 00:18:13.928 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:13.928 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.928 ************************************ 00:18:13.928 END TEST nvmf_digest_error 00:18:13.928 ************************************ 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:14.186 rmmod nvme_tcp 00:18:14.186 rmmod nvme_fabrics 00:18:14.186 rmmod nvme_keyring 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79996 ']' 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79996 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 79996 ']' 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 79996 00:18:14.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (79996) - No such process 00:18:14.186 Process with pid 79996 is not found 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 79996 is not found' 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:14.186 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:18:14.445 00:18:14.445 real 0m35.901s 00:18:14.445 user 1m8.592s 00:18:14.445 sys 0m9.764s 00:18:14.445 ************************************ 00:18:14.445 END TEST nvmf_digest 00:18:14.445 ************************************ 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.445 ************************************ 00:18:14.445 START TEST 
nvmf_host_multipath 00:18:14.445 ************************************ 00:18:14.445 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:14.704 * Looking for test storage... 00:18:14.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:14.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.704 --rc genhtml_branch_coverage=1 00:18:14.704 --rc genhtml_function_coverage=1 00:18:14.704 --rc genhtml_legend=1 00:18:14.704 --rc geninfo_all_blocks=1 00:18:14.704 --rc geninfo_unexecuted_blocks=1 00:18:14.704 00:18:14.704 ' 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:14.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.704 --rc genhtml_branch_coverage=1 00:18:14.704 --rc genhtml_function_coverage=1 00:18:14.704 --rc genhtml_legend=1 00:18:14.704 --rc geninfo_all_blocks=1 00:18:14.704 --rc geninfo_unexecuted_blocks=1 00:18:14.704 00:18:14.704 ' 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:14.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.704 --rc genhtml_branch_coverage=1 00:18:14.704 --rc genhtml_function_coverage=1 00:18:14.704 --rc genhtml_legend=1 00:18:14.704 --rc geninfo_all_blocks=1 00:18:14.704 --rc geninfo_unexecuted_blocks=1 00:18:14.704 00:18:14.704 ' 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:14.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.704 --rc genhtml_branch_coverage=1 00:18:14.704 --rc genhtml_function_coverage=1 00:18:14.704 --rc genhtml_legend=1 00:18:14.704 --rc geninfo_all_blocks=1 00:18:14.704 --rc geninfo_unexecuted_blocks=1 00:18:14.704 00:18:14.704 ' 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.704 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:14.705 Cannot find device "nvmf_init_br" 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:14.705 Cannot find device "nvmf_init_br2" 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:14.705 Cannot find device "nvmf_tgt_br" 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.705 Cannot find device "nvmf_tgt_br2" 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:14.705 Cannot find device "nvmf_init_br" 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:14.705 Cannot find device "nvmf_init_br2" 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:14.705 Cannot find device "nvmf_tgt_br" 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:14.705 Cannot find device "nvmf_tgt_br2" 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:18:14.705 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:14.964 Cannot find device "nvmf_br" 00:18:14.964 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:18:14.964 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:14.964 Cannot find device "nvmf_init_if" 00:18:14.964 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:18:14.964 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:14.964 Cannot find device "nvmf_init_if2" 00:18:14.964 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:18:14.964 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:18:14.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:14.965 10:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.965 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:15.224 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:18:15.224 00:18:15.224 --- 10.0.0.3 ping statistics --- 00:18:15.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.224 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:15.224 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:15.224 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:18:15.224 00:18:15.224 --- 10.0.0.4 ping statistics --- 00:18:15.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.224 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:15.224 00:18:15.224 --- 10.0.0.1 ping statistics --- 00:18:15.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.224 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:15.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:15.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:15.224 00:18:15.224 --- 10.0.0.2 ping statistics --- 00:18:15.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.224 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80523 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80523 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80523 ']' 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.224 10:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.224 [2024-11-04 10:07:47.248131] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
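Once connectivity is confirmed and nvme-tcp is loaded, nvmfappstart launches the target inside the namespace (the ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3 command shown above) and then blocks in waitforlisten until the RPC socket answers. A rough stand-in for that wait, assuming the default /var/tmp/spdk.sock socket mentioned in the log (the real waitforlisten in autotest_common.sh adds retry limits and pid checks), could look like:

    # illustrative only; paths taken from the trace above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target process died
        sleep 0.5
    done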
00:18:15.224 [2024-11-04 10:07:47.248221] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.483 [2024-11-04 10:07:47.400229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:15.483 [2024-11-04 10:07:47.463128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.483 [2024-11-04 10:07:47.463189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.483 [2024-11-04 10:07:47.463203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.483 [2024-11-04 10:07:47.463213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.483 [2024-11-04 10:07:47.463222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.483 [2024-11-04 10:07:47.464401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.483 [2024-11-04 10:07:47.464419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.483 [2024-11-04 10:07:47.523736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.418 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:16.418 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:18:16.418 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.418 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:16.418 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:16.418 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.418 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80523 00:18:16.418 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:16.418 [2024-11-04 10:07:48.581961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.676 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:16.934 Malloc0 00:18:16.934 10:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:17.192 10:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.450 10:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:17.708 [2024-11-04 10:07:49.808623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:17.708 10:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:17.966 [2024-11-04 10:07:50.064772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80579 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80579 /var/tmp/bdevperf.sock 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80579 ']' 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.966 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.967 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:18.533 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:18.533 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:18:18.533 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:18.791 10:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:19.050 Nvme0n1 00:18:19.050 10:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:19.616 Nvme0n1 00:18:19.616 10:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:19.616 10:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:20.550 10:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:20.550 10:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:20.809 10:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:21.066 10:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:21.066 10:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80622 00:18:21.066 10:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:21.066 10:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80523 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:27.637 Attaching 4 probes... 00:18:27.637 @path[10.0.0.3, 4421]: 17765 00:18:27.637 @path[10.0.0.3, 4421]: 18187 00:18:27.637 @path[10.0.0.3, 4421]: 18154 00:18:27.637 @path[10.0.0.3, 4421]: 18123 00:18:27.637 @path[10.0.0.3, 4421]: 18281 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80622 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:27.637 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:27.895 10:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:28.154 10:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:28.154 10:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80741 00:18:28.154 10:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80523 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:28.154 10:08:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:34.719 Attaching 4 probes... 00:18:34.719 @path[10.0.0.3, 4420]: 17838 00:18:34.719 @path[10.0.0.3, 4420]: 18099 00:18:34.719 @path[10.0.0.3, 4420]: 18208 00:18:34.719 @path[10.0.0.3, 4420]: 18286 00:18:34.719 @path[10.0.0.3, 4420]: 18191 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80741 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:34.719 10:08:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:35.029 10:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:35.029 10:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80849 00:18:35.029 10:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80523 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.029 10:08:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:41.591 Attaching 4 probes... 00:18:41.591 @path[10.0.0.3, 4421]: 14758 00:18:41.591 @path[10.0.0.3, 4421]: 17706 00:18:41.591 @path[10.0.0.3, 4421]: 17537 00:18:41.591 @path[10.0.0.3, 4421]: 17306 00:18:41.591 @path[10.0.0.3, 4421]: 17793 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80849 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:41.591 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:41.850 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:41.850 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80967 00:18:41.850 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80523 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:41.850 10:08:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.414 Attaching 4 probes... 
00:18:48.414 00:18:48.414 00:18:48.414 00:18:48.414 00:18:48.414 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80967 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:48.414 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:48.673 10:08:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:48.931 10:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:48.931 10:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81086 00:18:48.931 10:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:48.931 10:08:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80523 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.575 Attaching 4 probes... 
00:18:55.575 @path[10.0.0.3, 4421]: 17247 00:18:55.575 @path[10.0.0.3, 4421]: 17767 00:18:55.575 @path[10.0.0.3, 4421]: 17261 00:18:55.575 @path[10.0.0.3, 4421]: 17290 00:18:55.575 @path[10.0.0.3, 4421]: 17186 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81086 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:55.575 [2024-11-04 10:08:27.637915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8f890 is same with the state(6) to be set 00:18:55.575 [2024-11-04 10:08:27.637972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8f890 is same with the state(6) to be set 00:18:55.575 [2024-11-04 10:08:27.637984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8f890 is same with the state(6) to be set 00:18:55.575 [2024-11-04 10:08:27.637994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8f890 is same with the state(6) to be set 00:18:55.575 10:08:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:56.512 10:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:56.512 10:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81204 00:18:56.512 10:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80523 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:56.512 10:08:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:03.077 Attaching 4 probes... 
00:19:03.077 @path[10.0.0.3, 4420]: 16884 00:19:03.077 @path[10.0.0.3, 4420]: 17363 00:19:03.077 @path[10.0.0.3, 4420]: 18028 00:19:03.077 @path[10.0.0.3, 4420]: 17636 00:19:03.077 @path[10.0.0.3, 4420]: 17802 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81204 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:03.077 10:08:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:03.336 [2024-11-04 10:08:35.281883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:03.336 10:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:03.595 10:08:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:10.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:10.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81384 00:19:10.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80523 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:10.159 10:08:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:16.730 Attaching 4 probes... 
00:19:16.730 @path[10.0.0.3, 4421]: 16207 00:19:16.730 @path[10.0.0.3, 4421]: 16606 00:19:16.730 @path[10.0.0.3, 4421]: 16734 00:19:16.730 @path[10.0.0.3, 4421]: 16973 00:19:16.730 @path[10.0.0.3, 4421]: 17187 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81384 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80579 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80579 ']' 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80579 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80579 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:16.730 killing process with pid 80579 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80579' 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80579 00:19:16.730 10:08:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80579 00:19:16.730 { 00:19:16.730 "results": [ 00:19:16.730 { 00:19:16.730 "job": "Nvme0n1", 00:19:16.730 "core_mask": "0x4", 00:19:16.730 "workload": "verify", 00:19:16.730 "status": "terminated", 00:19:16.730 "verify_range": { 00:19:16.730 "start": 0, 00:19:16.730 "length": 16384 00:19:16.730 }, 00:19:16.730 "queue_depth": 128, 00:19:16.730 "io_size": 4096, 00:19:16.730 "runtime": 56.308685, 00:19:16.730 "iops": 7511.541070440555, 00:19:16.730 "mibps": 29.341957306408418, 00:19:16.730 "io_failed": 0, 00:19:16.730 "io_timeout": 0, 00:19:16.730 "avg_latency_us": 17010.447920027767, 00:19:16.730 "min_latency_us": 1102.1963636363637, 00:19:16.730 "max_latency_us": 7046430.72 00:19:16.730 } 00:19:16.730 ], 00:19:16.730 "core_count": 1 00:19:16.730 } 00:19:16.730 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80579 00:19:16.730 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:16.730 [2024-11-04 10:07:50.127663] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 
24.03.0 initialization... 00:19:16.730 [2024-11-04 10:07:50.127789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80579 ] 00:19:16.730 [2024-11-04 10:07:50.278934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.730 [2024-11-04 10:07:50.352415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.730 [2024-11-04 10:07:50.409938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:16.730 Running I/O for 90 seconds... 00:19:16.730 6933.00 IOPS, 27.08 MiB/s [2024-11-04T10:08:48.900Z] 7933.00 IOPS, 30.99 MiB/s [2024-11-04T10:08:48.900Z] 8343.33 IOPS, 32.59 MiB/s [2024-11-04T10:08:48.900Z] 8531.50 IOPS, 33.33 MiB/s [2024-11-04T10:08:48.900Z] 8641.20 IOPS, 33.75 MiB/s [2024-11-04T10:08:48.900Z] 8714.33 IOPS, 34.04 MiB/s [2024-11-04T10:08:48.900Z] 8771.14 IOPS, 34.26 MiB/s [2024-11-04T10:08:48.900Z] 8822.12 IOPS, 34.46 MiB/s [2024-11-04T10:08:48.900Z] [2024-11-04 10:08:00.062035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.062104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.062186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.062226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.062264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.062302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.062340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.062378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.062415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:16.730 [2024-11-04 10:08:00.062811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.062959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.062980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.730 [2024-11-04 10:08:00.063379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.063424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.063463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.063510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.063547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.063584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.063638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.730 [2024-11-04 10:08:00.063675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:16.730 [2024-11-04 10:08:00.063697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.063713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.063746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.063765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.063786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.063803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.063836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.063855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.063878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.063900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.063935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.063952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.063974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.063990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 
dnr:0 00:19:16.731 [2024-11-04 10:08:00.064022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.064416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.064454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.064506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.064546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.064585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.064641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.064678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.064716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.064963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.064986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.065026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.065064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.065102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.065139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:16.731 [2024-11-04 10:08:00.065177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.065215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.065252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.065290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.731 [2024-11-04 10:08:00.065328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.065373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.065413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.065451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.065499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.065538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.065576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:16.731 [2024-11-04 10:08:00.065610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.731 [2024-11-04 10:08:00.065633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.065671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.065711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.065749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.065786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.065824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.065862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.065900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.065938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.065968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.065985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:19:16.732 [2024-11-04 10:08:00.066364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.066561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.066577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.732 [2024-11-04 10:08:00.068116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:16.732 [2024-11-04 10:08:00.068797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.732 [2024-11-04 10:08:00.068814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:00.068835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:00.068851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:00.068873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:00.068889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:00.068914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:00.068932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:16.733 8837.56 IOPS, 34.52 MiB/s [2024-11-04T10:08:48.903Z] 8858.90 IOPS, 34.61 MiB/s [2024-11-04T10:08:48.903Z] 8878.09 IOPS, 34.68 MiB/s [2024-11-04T10:08:48.903Z] 8897.08 IOPS, 34.75 MiB/s [2024-11-04T10:08:48.903Z] 8916.85 IOPS, 34.83 MiB/s [2024-11-04T10:08:48.903Z] 8929.07 IOPS, 34.88 MiB/s [2024-11-04T10:08:48.903Z] 8938.53 IOPS, 34.92 MiB/s [2024-11-04T10:08:48.903Z] [2024-11-04 10:08:06.724272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.724332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.724417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.724458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.724527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.724564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:16.733 [2024-11-04 10:08:06.724617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.724656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.724693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.733 [2024-11-04 10:08:06.724736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.733 [2024-11-04 10:08:06.724773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.733 [2024-11-04 10:08:06.724810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.733 [2024-11-04 10:08:06.724848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.733 [2024-11-04 10:08:06.724885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.733 [2024-11-04 10:08:06.724930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.733 [2024-11-04 10:08:06.724966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.724988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:102 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.733 [2024-11-04 10:08:06.725018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:16.733 [2024-11-04 10:08:06.725494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.733 [2024-11-04 10:08:06.725518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.725558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.725610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.725650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.725688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.725725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.725763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.725801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:19:16.734 [2024-11-04 10:08:06.725822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.725838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.725875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.725913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.725950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.725971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.725987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.726653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.726691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.726730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.726766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.726804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.726840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.726877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.726919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.734 [2024-11-04 10:08:06.726957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.726990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:16.734 [2024-11-04 10:08:06.727008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.727030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.727046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:16.734 [2024-11-04 10:08:06.727067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.734 [2024-11-04 10:08:06.727083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:111 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.735 [2024-11-04 10:08:06.727946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.727967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.727998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:19:16.735 [2024-11-04 10:08:06.728213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.735 [2024-11-04 10:08:06.728644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:16.735 [2024-11-04 10:08:06.728666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.728682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.728703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.728719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.728740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.728756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.728777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.728793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.728814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.728829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.728851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.728868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.729547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:06.729614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:06.729681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:06.729726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:06.729778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:06.729822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:06.729868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:06.729912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.729956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:06.729977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.730006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.730028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.730057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.730073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.730102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.730118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.730147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:16.736 [2024-11-04 10:08:06.730163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.730191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.730207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.730236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.730266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.730297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.730313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:06.730342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:06.730358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:16.736 8388.88 IOPS, 32.77 MiB/s [2024-11-04T10:08:48.906Z] 8406.65 IOPS, 32.84 MiB/s [2024-11-04T10:08:48.906Z] 8431.17 IOPS, 32.93 MiB/s [2024-11-04T10:08:48.906Z] 8447.21 IOPS, 33.00 MiB/s [2024-11-04T10:08:48.906Z] 8459.25 IOPS, 33.04 MiB/s [2024-11-04T10:08:48.906Z] 8482.71 IOPS, 33.14 MiB/s [2024-11-04T10:08:48.906Z] 8499.32 IOPS, 33.20 MiB/s [2024-11-04T10:08:48.906Z] [2024-11-04 10:08:13.961932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:13.962071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:13.962144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:13.962181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:13.962214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:13.962247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:13.962280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:13.962313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.736 [2024-11-04 10:08:13.962347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:13.962381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:13.962454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:13.962489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:13.962521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:13.962553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:13.962598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:16.736 [2024-11-04 10:08:13.962621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.736 [2024-11-04 10:08:13.962636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.962690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.962712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.962737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.962754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.962777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.962793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.962814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.962830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.962850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.962865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.962886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.962901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.962932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.962950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.962972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
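For reference, the "(03/02)" pair in the completions above is Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible) from the NVMe completion status field, which is the expected per-command status while the tested ANA group is inaccessible. A minimal, stand-alone sketch of decoding such a status word follows; the helper name and layout constants are illustrative only (taken from the NVMe base specification's status-field layout), not SPDK's own code.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical decoder for the 16-bit completion status word (phase tag in
 * bit 0): SC = bits 8:1, SCT = bits 11:9, DNR = bit 15. Not SPDK code. */
static void decode_status(uint16_t raw)
{
    uint8_t sc  = (raw >> 1) & 0xff;  /* Status Code */
    uint8_t sct = (raw >> 9) & 0x07;  /* Status Code Type */
    uint8_t dnr = (raw >> 15) & 0x01; /* Do Not Retry */

    printf("sct:%02x sc:%02x dnr:%u\n", sct, sc, dnr);
}

int main(void)
{
    /* SCT 0x3 (Path Related Status), SC 0x02 matches the
     * "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completions logged here. */
    decode_status((uint16_t)((0x3 << 9) | (0x02 << 1)));
    return 0;
}
```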
00:19:16.737 [2024-11-04 10:08:13.963087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.737 [2024-11-04 10:08:13.963890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.963969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.963992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.737 [2024-11-04 10:08:13.964437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:16.737 [2024-11-04 10:08:13.964458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:16.737 [2024-11-04 10:08:13.964471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 
nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.964899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.964941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.738 [2024-11-04 10:08:13.964974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.738 [2024-11-04 10:08:13.965044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.738 [2024-11-04 10:08:13.965081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.738 [2024-11-04 10:08:13.965117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.738 [2024-11-04 10:08:13.965153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.738 [2024-11-04 10:08:13.965188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.738 [2024-11-04 10:08:13.965239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.738 [2024-11-04 10:08:13.965276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:19:16.738 [2024-11-04 10:08:13.965743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.738 [2024-11-04 10:08:13.965893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.738 [2024-11-04 10:08:13.965907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.965933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.965957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.965994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.966865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.966976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.966990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.967041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.967076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.967111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.967147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.739 [2024-11-04 10:08:13.967191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.967228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.967264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 
nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.967300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.967335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.967371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.739 [2024-11-04 10:08:13.967405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:16.739 [2024-11-04 10:08:13.967426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:13.967440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:13.967462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:13.967479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:16.740 8229.61 IOPS, 32.15 MiB/s [2024-11-04T10:08:48.910Z] 7886.71 IOPS, 30.81 MiB/s [2024-11-04T10:08:48.910Z] 7571.24 IOPS, 29.58 MiB/s [2024-11-04T10:08:48.910Z] 7280.04 IOPS, 28.44 MiB/s [2024-11-04T10:08:48.910Z] 7010.41 IOPS, 27.38 MiB/s [2024-11-04T10:08:48.910Z] 6760.04 IOPS, 26.41 MiB/s [2024-11-04T10:08:48.910Z] 6526.93 IOPS, 25.50 MiB/s [2024-11-04T10:08:48.910Z] 6520.90 IOPS, 25.47 MiB/s [2024-11-04T10:08:48.910Z] 6595.19 IOPS, 25.76 MiB/s [2024-11-04T10:08:48.910Z] 6665.84 IOPS, 26.04 MiB/s [2024-11-04T10:08:48.910Z] 6726.00 IOPS, 26.27 MiB/s [2024-11-04T10:08:48.910Z] 6782.59 IOPS, 26.49 MiB/s [2024-11-04T10:08:48.910Z] 6834.86 IOPS, 26.70 MiB/s [2024-11-04T10:08:48.910Z] [2024-11-04 10:08:27.638671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.638718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.638780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.638803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.638849] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.638868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.638890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.638906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.638928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.638944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.638965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.638981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.639018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.639055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639227] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c 
p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.740 [2024-11-04 10:08:27.639918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.639947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.639976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.639991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.640005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:16.740 [2024-11-04 10:08:27.640021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.640034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.640050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.640063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.740 [2024-11-04 10:08:27.640079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.740 [2024-11-04 10:08:27.640092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640317] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.640559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.640606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640625] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.640639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.640668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.640697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.640726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.640755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.640784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13984 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.640971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.640991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.641023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.641052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.641080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.641109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.641138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.641166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.641195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:16.741 [2024-11-04 10:08:27.641224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.741 [2024-11-04 10:08:27.641252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.741 [2024-11-04 10:08:27.641267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.741 [2024-11-04 10:08:27.641281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.641309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.641338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.641374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.641402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.641431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.641460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.641490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641826] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.641985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.641999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.642027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.642056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.742 [2024-11-04 10:08:27.642085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.642119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.642148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.642178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.642207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.642235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.642264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.742 [2024-11-04 10:08:27.642293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.742 [2024-11-04 10:08:27.642307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7f320 is same with the state(6) to be set 00:19:16.742 [2024-11-04 10:08:27.642323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.742 [2024-11-04 10:08:27.642334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.742 [2024-11-04 10:08:27.642345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13712 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642420] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14248 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14256 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14264 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14280 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14288 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:19:16.743 [2024-11-04 10:08:27.642741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14296 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14312 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14320 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14328 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.642961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.642975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.642984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.642995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.643008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.643031] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.643041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14344 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.643053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.643076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.643086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14352 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.643099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.643121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.643131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14360 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.643145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.643167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.643177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.643197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.743 [2024-11-04 10:08:27.643220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.743 [2024-11-04 10:08:27.643230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14376 len:8 PRP1 0x0 PRP2 0x0 00:19:16.743 [2024-11-04 10:08:27.643250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.743 [2024-11-04 10:08:27.643437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.743 [2024-11-04 10:08:27.643466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.743 [2024-11-04 10:08:27.643493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.743 [2024-11-04 10:08:27.643520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.743 [2024-11-04 10:08:27.643548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.743 [2024-11-04 10:08:27.643568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aebe40 is same with the state(6) to be set 00:19:16.743 [2024-11-04 10:08:27.644733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:16.743 [2024-11-04 10:08:27.644773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aebe40 (9): Bad file descriptor 00:19:16.743 [2024-11-04 10:08:27.645197] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.743 [2024-11-04 10:08:27.645232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aebe40 with addr=10.0.0.3, port=4421 00:19:16.743 [2024-11-04 10:08:27.645249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aebe40 is same with the state(6) to be set 00:19:16.743 [2024-11-04 10:08:27.645329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aebe40 (9): Bad file descriptor 00:19:16.743 [2024-11-04 10:08:27.645366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:16.744 [2024-11-04 10:08:27.645383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:16.744 [2024-11-04 10:08:27.645397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:16.744 [2024-11-04 10:08:27.645431] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:16.744 [2024-11-04 10:08:27.645450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:16.744 6888.56 IOPS, 26.91 MiB/s [2024-11-04T10:08:48.914Z] 6938.84 IOPS, 27.10 MiB/s [2024-11-04T10:08:48.914Z] 6980.89 IOPS, 27.27 MiB/s [2024-11-04T10:08:48.914Z] 7023.59 IOPS, 27.44 MiB/s [2024-11-04T10:08:48.914Z] 7069.20 IOPS, 27.61 MiB/s [2024-11-04T10:08:48.914Z] 7115.71 IOPS, 27.80 MiB/s [2024-11-04T10:08:48.914Z] 7155.98 IOPS, 27.95 MiB/s [2024-11-04T10:08:48.914Z] 7196.63 IOPS, 28.11 MiB/s [2024-11-04T10:08:48.914Z] 7231.80 IOPS, 28.25 MiB/s [2024-11-04T10:08:48.914Z] 7263.62 IOPS, 28.37 MiB/s [2024-11-04T10:08:48.914Z] [2024-11-04 10:08:37.709336] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
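The aborted-command dump above corresponds to the active path being reported inaccessible (ASYMMETRIC ACCESS INACCESSIBLE, 03/02) and its submission queue being deleted (ABORTED - SQ DELETION, 00/08) before the controller is reconnected on port 4421 and the second reset attempt succeeds. The per-second throughput markers can be sanity-checked against the job parameters printed in the summary further down (IO size 4096 bytes, queue depth 128); the following minimal Python sketch assumes those two values and uses illustrative helper names that are not part of the test suite:

    IO_SIZE = 4096       # bytes per I/O, from "IO size: 4096" in the job summary below (assumed here)
    QUEUE_DEPTH = 128    # outstanding I/Os, from "depth: 128" in the same job line (assumed here)

    def iops_to_mibps(iops: float) -> float:
        # Each completion moves IO_SIZE bytes; 1 MiB = 1024 * 1024 bytes.
        return iops * IO_SIZE / (1024 * 1024)

    def little_avg_latency_us(iops: float, depth: int = QUEUE_DEPTH) -> float:
        # Little's law: mean latency = outstanding I/Os / completion rate.
        return depth / iops * 1_000_000

    print(round(iops_to_mibps(7263.62), 2))       # 28.37 -> matches the last progress marker above
    print(round(iops_to_mibps(7511.54), 2))       # 29.34 -> matches the final Nvme0n1 summary line
    print(round(little_avg_latency_us(7511.54)))  # ~17041 us, close to the reported 17010.45 us average

The 17010.45 us average latency reported in the summary below is within a fraction of a percent of this Little's-law estimate, which is what one would expect while the queue stays saturated at depth 128.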
00:19:16.744 7294.93 IOPS, 28.50 MiB/s [2024-11-04T10:08:48.914Z] 7322.83 IOPS, 28.60 MiB/s [2024-11-04T10:08:48.914Z] 7349.44 IOPS, 28.71 MiB/s [2024-11-04T10:08:48.914Z] 7373.65 IOPS, 28.80 MiB/s [2024-11-04T10:08:48.914Z] 7397.70 IOPS, 28.90 MiB/s [2024-11-04T10:08:48.914Z] 7413.27 IOPS, 28.96 MiB/s [2024-11-04T10:08:48.914Z] 7431.02 IOPS, 29.03 MiB/s [2024-11-04T10:08:48.914Z] 7448.92 IOPS, 29.10 MiB/s [2024-11-04T10:08:48.914Z] 7466.46 IOPS, 29.17 MiB/s [2024-11-04T10:08:48.914Z] 7486.93 IOPS, 29.25 MiB/s [2024-11-04T10:08:48.914Z] 7507.23 IOPS, 29.33 MiB/s [2024-11-04T10:08:48.914Z] Received shutdown signal, test time was about 56.309502 seconds 00:19:16.744 00:19:16.744 Latency(us) 00:19:16.744 [2024-11-04T10:08:48.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.744 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:16.744 Verification LBA range: start 0x0 length 0x4000 00:19:16.744 Nvme0n1 : 56.31 7511.54 29.34 0.00 0.00 17010.45 1102.20 7046430.72 00:19:16.744 [2024-11-04T10:08:48.914Z] =================================================================================================================== 00:19:16.744 [2024-11-04T10:08:48.914Z] Total : 7511.54 29.34 0.00 0.00 17010.45 1102.20 7046430.72 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:16.744 rmmod nvme_tcp 00:19:16.744 rmmod nvme_fabrics 00:19:16.744 rmmod nvme_keyring 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80523 ']' 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80523 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80523 ']' 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80523 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.744 10:08:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80523 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80523' 00:19:16.744 killing process with pid 80523 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80523 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80523 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:16.744 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.002 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:17.002 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:17.002 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:17.002 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:17.002 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:17.002 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:17.002 10:08:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:17.002 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:19:17.003 00:19:17.003 real 1m2.556s 00:19:17.003 user 2m54.043s 00:19:17.003 sys 0m18.154s 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:17.003 ************************************ 00:19:17.003 END TEST nvmf_host_multipath 00:19:17.003 ************************************ 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.003 ************************************ 00:19:17.003 START TEST nvmf_timeout 00:19:17.003 ************************************ 00:19:17.003 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:17.262 * Looking for test storage... 00:19:17.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 
-- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.262 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:17.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.263 --rc genhtml_branch_coverage=1 00:19:17.263 --rc genhtml_function_coverage=1 00:19:17.263 --rc genhtml_legend=1 00:19:17.263 --rc geninfo_all_blocks=1 00:19:17.263 --rc geninfo_unexecuted_blocks=1 00:19:17.263 00:19:17.263 ' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:17.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.263 --rc genhtml_branch_coverage=1 00:19:17.263 --rc genhtml_function_coverage=1 00:19:17.263 --rc genhtml_legend=1 00:19:17.263 --rc geninfo_all_blocks=1 00:19:17.263 --rc geninfo_unexecuted_blocks=1 00:19:17.263 00:19:17.263 ' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:17.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.263 --rc genhtml_branch_coverage=1 00:19:17.263 --rc genhtml_function_coverage=1 00:19:17.263 --rc genhtml_legend=1 00:19:17.263 --rc geninfo_all_blocks=1 00:19:17.263 --rc geninfo_unexecuted_blocks=1 00:19:17.263 00:19:17.263 ' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:17.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.263 --rc genhtml_branch_coverage=1 00:19:17.263 --rc genhtml_function_coverage=1 00:19:17.263 --rc genhtml_legend=1 00:19:17.263 --rc geninfo_all_blocks=1 00:19:17.263 --rc geninfo_unexecuted_blocks=1 00:19:17.263 00:19:17.263 ' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.263 10:08:49 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.263 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 
00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:17.263 Cannot find device "nvmf_init_br" 00:19:17.263 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:17.264 Cannot find device "nvmf_init_br2" 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:17.264 Cannot find device "nvmf_tgt_br" 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.264 Cannot find device "nvmf_tgt_br2" 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:17.264 Cannot find device "nvmf_init_br" 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:19:17.264 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:17.526 Cannot find device "nvmf_init_br2" 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:17.526 Cannot find device "nvmf_tgt_br" 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:17.526 Cannot find device "nvmf_tgt_br2" 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:17.526 Cannot find device "nvmf_br" 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:17.526 Cannot find device "nvmf_init_if" 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:17.526 Cannot find device "nvmf_init_if2" 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.526 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:17.526 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:17.527 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:17.786 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:17.786 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:19:17.786 00:19:17.786 --- 10.0.0.3 ping statistics --- 00:19:17.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.786 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:17.786 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:17.786 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:19:17.786 00:19:17.786 --- 10.0.0.4 ping statistics --- 00:19:17.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.786 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:17.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:19:17.786 00:19:17.786 --- 10.0.0.1 ping statistics --- 00:19:17.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.786 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:17.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:19:17.786 00:19:17.786 --- 10.0.0.2 ping statistics --- 00:19:17.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.786 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.786 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81741 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81741 00:19:17.787 10:08:49 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81741 ']' 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:17.787 10:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.787 [2024-11-04 10:08:49.870367] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:19:17.787 [2024-11-04 10:08:49.870474] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.045 [2024-11-04 10:08:50.023233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:18.045 [2024-11-04 10:08:50.096950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.045 [2024-11-04 10:08:50.097012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.045 [2024-11-04 10:08:50.097027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.045 [2024-11-04 10:08:50.097037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.045 [2024-11-04 10:08:50.097047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
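The veth topology that nvmf_veth_init assembles in the trace above boils down to the following sketch (namespace, interface and address names are taken from the log; this is a summary rather than the exact helper implementation, and the earlier "Cannot find device" messages simply come from the preliminary cleanup of leftover interfaces, which is expected on a clean host):

  # target interfaces live in their own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # two initiator-side and two target-side veth pairs, each peered with a bridge port
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiators get 10.0.0.1/.2, targets 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and tie the four host-side peers together with a bridge
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # allow NVMe/TCP (port 4420) in on the initiator interfaces, as in the iptables lines above
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping exchanges that follow (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) are the sanity check that the bridge forwards in both directions before the target application is started.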
00:19:18.045 [2024-11-04 10:08:50.098242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.045 [2024-11-04 10:08:50.098256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.045 [2024-11-04 10:08:50.160214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:18.304 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:18.304 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:18.304 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.304 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.304 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:18.304 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.304 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.304 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:18.562 [2024-11-04 10:08:50.573455] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.562 10:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:18.827 Malloc0 00:19:19.088 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.347 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.606 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:19.866 [2024-11-04 10:08:51.890865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81788 00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81788 /var/tmp/bdevperf.sock 00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81788 ']' 00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:19.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
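Stripped of the xtrace prefixes, the target-side bring-up that timeout.sh has just performed is a five-call RPC sequence against the nvmf_tgt started above (rpc.py talks to the default /var/tmp/spdk.sock here; paths are relative to the spdk repo root), followed by launching bdevperf in server mode on its own RPC socket:

  # transport, backing bdev, subsystem, namespace, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # bdevperf idles (-z) until it receives perform_tests on /var/tmp/bdevperf.sock
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &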
00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:19.866 10:08:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:19.866 [2024-11-04 10:08:51.968769] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:19:19.866 [2024-11-04 10:08:51.968876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81788 ] 00:19:20.125 [2024-11-04 10:08:52.120040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.125 [2024-11-04 10:08:52.188853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.125 [2024-11-04 10:08:52.246203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:21.062 10:08:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:21.062 10:08:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:21.062 10:08:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:21.321 10:08:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:21.580 NVMe0n1 00:19:21.580 10:08:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81812 00:19:21.580 10:08:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:21.580 10:08:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:21.839 Running I/O for 10 seconds... 
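On the host side the test then configures the NVMe bdev layer through bdevperf's RPC socket and kicks off the verify workload; the controller is attached with a 5-second controller-loss timeout and a 2-second reconnect delay, which the listener removal in the next entries then exercises. A sketch of the same calls (the -r option to bdev_nvme_set_options is passed exactly as in the trace; its long name is not shown in the log):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # the attached controller shows up as bdev NVMe0n1; start the queued I/O
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests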
00:19:22.805 10:08:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:23.066 6820.00 IOPS, 26.64 MiB/s [2024-11-04T10:08:55.236Z] [2024-11-04 10:08:55.002139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.066 [2024-11-04 10:08:55.002205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.066 [2024-11-04 10:08:55.002229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.066 [2024-11-04 10:08:55.002241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63840 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.002504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.002979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:23.067 [2024-11-04 10:08:55.003088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003720] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.003730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.003740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004690] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.067 [2024-11-04 10:08:55.004706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.067 [2024-11-04 10:08:55.004715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.004726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.004736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.004747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.004756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.004875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.004886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.004897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.004981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.004997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.005628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.005775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 
10:08:55.006460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.068 [2024-11-04 10:08:55.006888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:65 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.068 [2024-11-04 10:08:55.006897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.006908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.006917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.006928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.006937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.006956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.006966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.006981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.006991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64488 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 
10:08:55.007317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.069 [2024-11-04 10:08:55.007519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.069 [2024-11-04 10:08:55.007714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.069 [2024-11-04 10:08:55.007725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.070 [2024-11-04 10:08:55.007734] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.007746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.070 [2024-11-04 10:08:55.007757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.007768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.070 [2024-11-04 10:08:55.007777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.007788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.070 [2024-11-04 10:08:55.007797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.007809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.070 [2024-11-04 10:08:55.007818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.007828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.070 [2024-11-04 10:08:55.007847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.007859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.070 [2024-11-04 10:08:55.007868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.007879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf2360 is same with the state(6) to be set 00:19:23.070 [2024-11-04 10:08:55.007892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:23.070 [2024-11-04 10:08:55.007900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:23.070 [2024-11-04 10:08:55.007914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:19:23.070 [2024-11-04 10:08:55.007923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.008054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.070 [2024-11-04 10:08:55.008071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.008082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.070 [2024-11-04 10:08:55.008096] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.008107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.070 [2024-11-04 10:08:55.008116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.008126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.070 [2024-11-04 10:08:55.008135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.070 [2024-11-04 10:08:55.008144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84e50 is same with the state(6) to be set 00:19:23.070 [2024-11-04 10:08:55.008386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:23.070 [2024-11-04 10:08:55.008418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84e50 (9): Bad file descriptor 00:19:23.070 [2024-11-04 10:08:55.008508] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:23.070 [2024-11-04 10:08:55.008528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84e50 with addr=10.0.0.3, port=4420 00:19:23.070 [2024-11-04 10:08:55.008539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84e50 is same with the state(6) to be set 00:19:23.070 [2024-11-04 10:08:55.008557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84e50 (9): Bad file descriptor 00:19:23.070 [2024-11-04 10:08:55.008573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:23.070 [2024-11-04 10:08:55.008582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:23.070 [2024-11-04 10:08:55.008607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:23.070 [2024-11-04 10:08:55.008630] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
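(Annotation.) The dump above is the controller-disconnect path aborting every queued READ/WRITE on qid:1 with "SQ DELETION", while each reconnect attempt to 10.0.0.3:4420 is refused (errno 111, ECONNREFUSED). While those retries run, the test first confirms that the controller and its bdev are still registered, and only after sleeping past the loss timeout does it expect them to be gone (the later checks compare against empty strings). The IOPS figures bdevperf prints below fall off roughly as 1/t -- 3978 ≈ 7956/2 after 2 s, 2652 ≈ 7956/3 after 3 s, down to 994.50 ≈ 7956/8 -- which is what a running average over the whole runtime looks like when no new I/O completes; it is consistent with about 7,956 I/Os having finished before the queue stalled. A minimal sketch of the same liveness check, using only the rpc.py calls visible in this log (the RPC/SOCK variables are shorthand introduced here; paths and names are as they appear in the log):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # While reconnects are still being retried, the controller and its bdev stay registered.
  ctrlr=$("$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')   # expect NVMe0
  bdev=$("$RPC" -s "$SOCK" bdev_get_bdevs | jq -r '.[].name')               # expect NVMe0n1
  [[ "$ctrlr" == NVMe0 && "$bdev" == NVMe0n1 ]] || echo "controller/bdev dropped before the loss timeout"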
00:19:23.070 [2024-11-04 10:08:55.008642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:23.070 10:08:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:24.945 3978.00 IOPS, 15.54 MiB/s [2024-11-04T10:08:57.115Z] 2652.00 IOPS, 10.36 MiB/s [2024-11-04T10:08:57.115Z] [2024-11-04 10:08:57.009030] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.945 [2024-11-04 10:08:57.009115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84e50 with addr=10.0.0.3, port=4420 00:19:24.945 [2024-11-04 10:08:57.009133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84e50 is same with the state(6) to be set 00:19:24.945 [2024-11-04 10:08:57.009162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84e50 (9): Bad file descriptor 00:19:24.945 [2024-11-04 10:08:57.009186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:24.945 [2024-11-04 10:08:57.009198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:24.945 [2024-11-04 10:08:57.009210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:24.945 [2024-11-04 10:08:57.009241] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:24.945 [2024-11-04 10:08:57.009254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:24.945 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:24.945 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:24.945 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:25.204 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:25.204 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:25.204 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:25.204 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:25.463 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:25.463 10:08:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:27.098 1989.00 IOPS, 7.77 MiB/s [2024-11-04T10:08:59.268Z] 1591.20 IOPS, 6.22 MiB/s [2024-11-04T10:08:59.268Z] [2024-11-04 10:08:59.009537] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:27.098 [2024-11-04 10:08:59.009628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84e50 with addr=10.0.0.3, port=4420 00:19:27.098 [2024-11-04 10:08:59.009649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84e50 is same with the state(6) to be set 00:19:27.098 [2024-11-04 10:08:59.009677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84e50 (9): Bad file descriptor 00:19:27.098 [2024-11-04 10:08:59.009713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:27.098 [2024-11-04 10:08:59.009726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:27.098 [2024-11-04 10:08:59.009738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:27.098 [2024-11-04 10:08:59.009770] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:27.098 [2024-11-04 10:08:59.009783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:28.971 1326.00 IOPS, 5.18 MiB/s [2024-11-04T10:09:01.141Z] 1136.57 IOPS, 4.44 MiB/s [2024-11-04T10:09:01.141Z] [2024-11-04 10:09:01.009863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:28.971 [2024-11-04 10:09:01.009933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:19:28.971 [2024-11-04 10:09:01.009946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:19:28.971 [2024-11-04 10:09:01.009956] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:19:28.971 [2024-11-04 10:09:01.009990] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:19:29.936 994.50 IOPS, 3.88 MiB/s 00:19:29.936 Latency(us) 00:19:29.936 [2024-11-04T10:09:02.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.936 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.936 Verification LBA range: start 0x0 length 0x4000 00:19:29.936 NVMe0n1 : 8.16 974.51 3.81 15.68 0.00 129041.89 4110.89 7015926.69 00:19:29.936 [2024-11-04T10:09:02.106Z] =================================================================================================================== 00:19:29.936 [2024-11-04T10:09:02.106Z] Total : 974.51 3.81 15.68 0.00 129041.89 4110.89 7015926.69 00:19:29.936 { 00:19:29.936 "results": [ 00:19:29.936 { 00:19:29.936 "job": "NVMe0n1", 00:19:29.936 "core_mask": "0x4", 00:19:29.936 "workload": "verify", 00:19:29.936 "status": "finished", 00:19:29.936 "verify_range": { 00:19:29.936 "start": 0, 00:19:29.936 "length": 16384 00:19:29.936 }, 00:19:29.936 "queue_depth": 128, 00:19:29.936 "io_size": 4096, 00:19:29.936 "runtime": 8.164086, 00:19:29.936 "iops": 974.512027433322, 00:19:29.936 "mibps": 3.806687607161414, 00:19:29.936 "io_failed": 128, 00:19:29.936 "io_timeout": 0, 00:19:29.936 "avg_latency_us": 129041.89040528992, 00:19:29.936 "min_latency_us": 4110.894545454546, 00:19:29.936 "max_latency_us": 7015926.69090909 00:19:29.936 } 00:19:29.936 ], 00:19:29.936 "core_count": 1 00:19:29.936 } 00:19:30.503 10:09:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:30.503 10:09:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:30.503 10:09:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:30.762 10:09:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:30.762 10:09:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:30.762 10:09:02 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:30.762 10:09:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81812 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81788 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81788 ']' 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81788 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81788 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:31.331 killing process with pid 81788 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81788' 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81788 00:19:31.331 Received shutdown signal, test time was about 9.399901 seconds 00:19:31.331 00:19:31.331 Latency(us) 00:19:31.331 [2024-11-04T10:09:03.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.331 [2024-11-04T10:09:03.501Z] =================================================================================================================== 00:19:31.331 [2024-11-04T10:09:03.501Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81788 00:19:31.331 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:31.590 [2024-11-04 10:09:03.713132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81940 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81940 /var/tmp/bdevperf.sock 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81940 ']' 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:31.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.590 10:09:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:31.850 [2024-11-04 10:09:03.795003] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:19:31.850 [2024-11-04 10:09:03.795104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81940 ] 00:19:31.850 [2024-11-04 10:09:03.943824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.850 [2024-11-04 10:09:04.001398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.109 [2024-11-04 10:09:04.056824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.109 10:09:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.109 10:09:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:32.109 10:09:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:32.368 10:09:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:32.626 NVMe0n1 00:19:32.626 10:09:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81955 00:19:32.626 10:09:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.626 10:09:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:32.885 Running I/O for 10 seconds... 
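(Annotation.) After the first run is torn down, the listener is re-added on 10.0.0.3:4420 and a second bdevperf instance is started; this time the attach sets the reconnect policy explicitly: retry the connection every 1 s, fail pending I/O fast after 2 s, and give up on the controller after 5 s (per the --reconnect-delay-sec, --fast-io-fail-timeout-sec and --ctrlr-loss-timeout-sec values shown above). Condensed as a standalone sketch, with the commands copied from this log (RPC/SOCK are shorthand introduced here):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1        # option string as issued above
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

When nvmf_subsystem_remove_listener is issued again just below, these three timers are what bound how long the new NVMe0n1 bdev keeps queueing and retrying before its I/O starts failing.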
00:19:33.824 10:09:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:34.087 6804.00 IOPS, 26.58 MiB/s [2024-11-04T10:09:06.257Z] [2024-11-04 10:09:06.079194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.087 [2024-11-04 10:09:06.079370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 
10:09:06.079426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to 
be set 00:19:34.088 [2024-11-04 10:09:06.079627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.079998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.080006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.080014] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.080022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.080030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.080038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.080048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f80d0 is same with the state(6) to be set 00:19:34.088 [2024-11-04 10:09:06.081390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.088 [2024-11-04 10:09:06.081431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.088 [2024-11-04 10:09:06.081454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.088 [2024-11-04 10:09:06.081465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.088 [2024-11-04 10:09:06.081477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.088 [2024-11-04 10:09:06.081488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.088 [2024-11-04 10:09:06.081500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.088 [2024-11-04 10:09:06.081509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.081520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.081528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.081540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.081549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.081560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.081569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.081603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.081615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.081626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.081635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.081647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.081766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.081784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.081794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.081805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.081814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 
[2024-11-04 10:09:06.082571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.082888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.082900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.083822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.083947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.089 [2024-11-04 10:09:06.084963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.089 [2024-11-04 10:09:06.084973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.084985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.084994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67384 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.085778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.085787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.086087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.086226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.086353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.086366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.086377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.086387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.086514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.086524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.086678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.086693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.086932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:34.090 [2024-11-04 10:09:06.086962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.086975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.086985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.086997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.087007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.087018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.087027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.087038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.087047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.087058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.087067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.087078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.087215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.087351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.087481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.087610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.087629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.087764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.087913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.088063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.088203] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.088449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.088462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.088474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.088483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.088683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.088704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.088724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.088734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.088745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.088755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.088872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.088891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.088903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.089146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.089161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.090 [2024-11-04 10:09:06.089171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.090 [2024-11-04 10:09:06.089182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.089191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.089311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.089330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.089436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.089452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.089464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.089699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.089725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.089736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.089747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.089756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.089767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.089776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.089914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.090051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.090183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.090202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.090302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.090314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.090325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.090335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.090447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.090461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.090473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.090721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.090738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.090747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.090759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.090768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.091821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.091950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.092090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.092226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.092332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.092352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.092362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.092471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.092490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.092502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.091 [2024-11-04 10:09:06.092616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.092634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.091 [2024-11-04 10:09:06.092644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.092779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb75360 is same with the state(6) to be set 00:19:34.091 [2024-11-04 10:09:06.092934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.091 [2024-11-04 10:09:06.093068] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.091 [2024-11-04 10:09:06.093086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67744 len:8 PRP1 0x0 PRP2 0x0 00:19:34.091 [2024-11-04 10:09:06.093219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.093316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.091 [2024-11-04 10:09:06.093325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.091 [2024-11-04 10:09:06.093333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67872 len:8 PRP1 0x0 PRP2 0x0 00:19:34.091 [2024-11-04 10:09:06.093342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.093352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.091 [2024-11-04 10:09:06.093359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.091 [2024-11-04 10:09:06.093366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67880 len:8 PRP1 0x0 PRP2 0x0 00:19:34.091 [2024-11-04 10:09:06.093633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.093658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.091 [2024-11-04 10:09:06.093667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.091 [2024-11-04 10:09:06.093675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67888 len:8 PRP1 0x0 PRP2 0x0 00:19:34.091 [2024-11-04 10:09:06.093684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.093693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.091 [2024-11-04 10:09:06.093700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.091 [2024-11-04 10:09:06.093708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67896 len:8 PRP1 0x0 PRP2 0x0 00:19:34.091 [2024-11-04 10:09:06.093977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.091 [2024-11-04 10:09:06.093991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.093998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.094006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67904 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.094015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.094025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.094285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:19:34.092 [2024-11-04 10:09:06.094296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67912 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.094305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.094316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.094324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.094332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67920 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.094341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.094621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.094632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.094640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67928 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.094650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.094659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.094666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.094766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67936 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.094783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.094794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.094802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.094934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67944 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.095067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.095086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.095227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.095356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67952 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.095367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.095499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.095629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 
10:09:06.095650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67960 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.095892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.095913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.095921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.095929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67968 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.096170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.096188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.096196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.096204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67976 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.096213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.096222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.096229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.096236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67984 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.096245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.096356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.096366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.096375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67992 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.096502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.096640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.096774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.096790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68000 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.096800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.097048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.097059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.097068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68008 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.097077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.097086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.092 [2024-11-04 10:09:06.097093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.092 [2024-11-04 10:09:06.097101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68016 len:8 PRP1 0x0 PRP2 0x0 00:19:34.092 [2024-11-04 10:09:06.097346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.097721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.092 [2024-11-04 10:09:06.097749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.097761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.092 [2024-11-04 10:09:06.097770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.097780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.092 [2024-11-04 10:09:06.097789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.097798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.092 [2024-11-04 10:09:06.097807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.092 [2024-11-04 10:09:06.098042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb07e50 is same with the state(6) to be set 00:19:34.092 [2024-11-04 10:09:06.098450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:34.092 [2024-11-04 10:09:06.098496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07e50 (9): Bad file descriptor 00:19:34.092 [2024-11-04 10:09:06.098731] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:34.092 [2024-11-04 10:09:06.098847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb07e50 with addr=10.0.0.3, port=4420 00:19:34.092 [2024-11-04 10:09:06.098865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb07e50 is same with the state(6) to be set 00:19:34.092 [2024-11-04 10:09:06.098886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07e50 (9): Bad file descriptor 00:19:34.092 [2024-11-04 10:09:06.099146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:34.092 [2024-11-04 10:09:06.099169] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:34.093 [2024-11-04 10:09:06.099186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:34.093 [2024-11-04 10:09:06.099209] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:34.093 [2024-11-04 10:09:06.099221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:34.093 10:09:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:35.029 4187.50 IOPS, 16.36 MiB/s [2024-11-04T10:09:07.199Z] [2024-11-04 10:09:07.099504] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.029 [2024-11-04 10:09:07.099586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb07e50 with addr=10.0.0.3, port=4420 00:19:35.029 [2024-11-04 10:09:07.099614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb07e50 is same with the state(6) to be set 00:19:35.029 [2024-11-04 10:09:07.099644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07e50 (9): Bad file descriptor 00:19:35.029 [2024-11-04 10:09:07.099664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:35.029 [2024-11-04 10:09:07.099674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:35.029 [2024-11-04 10:09:07.099687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:35.029 [2024-11-04 10:09:07.099719] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:19:35.029 [2024-11-04 10:09:07.099732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:35.029 10:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:35.288 [2024-11-04 10:09:07.437858] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.546 10:09:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81955 00:19:36.080 2791.67 IOPS, 10.90 MiB/s [2024-11-04T10:09:08.250Z] [2024-11-04 10:09:08.113918] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
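The reset/reconnect churn above is driven by the test toggling the TCP listener on the target side. Below is a minimal sketch of that cycle, using only the rpc.py subcommands, NQN, address and port that appear verbatim in the surrounding log; the pause between the two calls is an assumption, not taken from the test script.

    # Sketch of the listener toggle exercised by host/timeout.sh (interval between calls is assumed).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420   # host sees "Bad file descriptor" and queued I/O is aborted
    sleep 1                                                                 # assumed pause while reconnect attempts fail with errno 111
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420      # reconnect succeeds and the controller reset completes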
00:19:37.950 2093.75 IOPS, 8.18 MiB/s [2024-11-04T10:09:11.056Z] 3025.20 IOPS, 11.82 MiB/s [2024-11-04T10:09:11.993Z] 4094.00 IOPS, 15.99 MiB/s [2024-11-04T10:09:12.929Z] 4844.00 IOPS, 18.92 MiB/s [2024-11-04T10:09:13.866Z] 5397.75 IOPS, 21.08 MiB/s [2024-11-04T10:09:15.243Z] 5793.56 IOPS, 22.63 MiB/s [2024-11-04T10:09:15.243Z] 6151.00 IOPS, 24.03 MiB/s 00:19:43.073 Latency(us) 00:19:43.073 [2024-11-04T10:09:15.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.073 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:43.073 Verification LBA range: start 0x0 length 0x4000 00:19:43.073 NVMe0n1 : 10.01 6158.46 24.06 0.00 0.00 20753.82 1318.17 3050402.91 00:19:43.073 [2024-11-04T10:09:15.243Z] =================================================================================================================== 00:19:43.073 [2024-11-04T10:09:15.243Z] Total : 6158.46 24.06 0.00 0.00 20753.82 1318.17 3050402.91 00:19:43.073 { 00:19:43.073 "results": [ 00:19:43.073 { 00:19:43.073 "job": "NVMe0n1", 00:19:43.073 "core_mask": "0x4", 00:19:43.073 "workload": "verify", 00:19:43.073 "status": "finished", 00:19:43.073 "verify_range": { 00:19:43.073 "start": 0, 00:19:43.073 "length": 16384 00:19:43.073 }, 00:19:43.073 "queue_depth": 128, 00:19:43.073 "io_size": 4096, 00:19:43.073 "runtime": 10.008675, 00:19:43.073 "iops": 6158.45753808571, 00:19:43.073 "mibps": 24.056474758147306, 00:19:43.073 "io_failed": 0, 00:19:43.073 "io_timeout": 0, 00:19:43.073 "avg_latency_us": 20753.816009368482, 00:19:43.073 "min_latency_us": 1318.1672727272728, 00:19:43.073 "max_latency_us": 3050402.909090909 00:19:43.073 } 00:19:43.073 ], 00:19:43.073 "core_count": 1 00:19:43.073 } 00:19:43.073 10:09:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.073 10:09:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82061 00:19:43.073 10:09:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:43.073 Running I/O for 10 seconds... 
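The summary row above can be cross-checked against the JSON results: the MiB/s column is just the reported IOPS multiplied by the 4096-byte I/O size and converted to mebibytes. A quick sanity check using only figures already printed in the log:

    # Recompute the throughput column of the bdevperf summary from the reported IOPS and io_size.
    awk -v iops=6158.46 -v io_size=4096 'BEGIN { printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # Prints 24.06 MiB/s, matching the "NVMe0n1 : 10.01 6158.46 24.06 ..." row and the "mibps" field in the JSON.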
00:19:44.012 10:09:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:44.012 7076.00 IOPS, 27.64 MiB/s [2024-11-04T10:09:16.182Z] [2024-11-04 10:09:16.149283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.012 [2024-11-04 10:09:16.149358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.012 [2024-11-04 10:09:16.149380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.012 [2024-11-04 10:09:16.149390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.012 [2024-11-04 10:09:16.149402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.012 [2024-11-04 10:09:16.149411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.012 [2024-11-04 10:09:16.149422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.012 [2024-11-04 10:09:16.149431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.012 [2024-11-04 10:09:16.149442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.012 [2024-11-04 10:09:16.149451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.012 [2024-11-04 10:09:16.149461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.012 [2024-11-04 10:09:16.149469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.012 [2024-11-04 10:09:16.149480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.012 [2024-11-04 10:09:16.149488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.012 [2024-11-04 10:09:16.149498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.012 [2024-11-04 10:09:16.149507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.012 [2024-11-04 10:09:16.149517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.012 [2024-11-04 10:09:16.149525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.149535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65888 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.149544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.149555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.149563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.149573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.149582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.149604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.149613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.149624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.149632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.149658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.149668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:44.013 [2024-11-04 10:09:16.150243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.150867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.150985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151799] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.151810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.151819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.152114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.152143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.152156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.152166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.152177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.152186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.152197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.152222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.152233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.152243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.152253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.152262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.013 [2024-11-04 10:09:16.152272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.013 [2024-11-04 10:09:16.152613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.152631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.152656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.152668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.152677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.152688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.152812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.152827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.152972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 
10:09:16.153769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.153779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.153789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.154925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.154937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.155157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.155178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.155197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.155217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.155236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.155255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.155411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.014 [2024-11-04 10:09:16.155560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.014 [2024-11-04 10:09:16.155572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.155671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.155681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.155693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.155702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.155712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.155721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.155828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.155840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.155861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.155870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.155989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.156856] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.156999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.015 [2024-11-04 10:09:16.157848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.157871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.157985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.157996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.158116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.158146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.158246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.158275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.158295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.158449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.158584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.015 [2024-11-04 10:09:16.158798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.015 [2024-11-04 10:09:16.158808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.158819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.016 [2024-11-04 10:09:16.158828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.158839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.016 [2024-11-04 10:09:16.158848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.158859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.016 [2024-11-04 10:09:16.159110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.159124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.016 [2024-11-04 10:09:16.159133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.159235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.016 [2024-11-04 10:09:16.159248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.159259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76580 is same with the state(6) to be set 00:19:44.016 [2024-11-04 10:09:16.159271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.016 [2024-11-04 10:09:16.159278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.016 [2024-11-04 10:09:16.159286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:19:44.016 [2024-11-04 10:09:16.159295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.159725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.016 [2024-11-04 10:09:16.159752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.159764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.016 [2024-11-04 10:09:16.159774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.159783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.016 [2024-11-04 10:09:16.159792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.159802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.016 [2024-11-04 10:09:16.159811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.016 [2024-11-04 10:09:16.159819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb07e50 is same with the state(6) to be set 00:19:44.016 [2024-11-04 10:09:16.160356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:44.016 [2024-11-04 10:09:16.160394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07e50 (9): Bad file descriptor 00:19:44.016 [2024-11-04 10:09:16.160493] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.016 [2024-11-04 10:09:16.160634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb07e50 with addr=10.0.0.3, port=4420 00:19:44.016 [2024-11-04 10:09:16.160649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb07e50 is same with the state(6) to be set 00:19:44.016 [2024-11-04 10:09:16.160677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07e50 (9): Bad file descriptor 00:19:44.016 [2024-11-04 10:09:16.160702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:44.016 [2024-11-04 10:09:16.160712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:44.016 [2024-11-04 10:09:16.160722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:44.016 [2024-11-04 10:09:16.160806] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:44.016 [2024-11-04 10:09:16.160823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:44.016 10:09:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:45.212 4106.00 IOPS, 16.04 MiB/s [2024-11-04T10:09:17.382Z] [2024-11-04 10:09:17.160973] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.212 [2024-11-04 10:09:17.161092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb07e50 with addr=10.0.0.3, port=4420 00:19:45.212 [2024-11-04 10:09:17.161109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb07e50 is same with the state(6) to be set 00:19:45.212 [2024-11-04 10:09:17.161135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07e50 (9): Bad file descriptor 00:19:45.212 [2024-11-04 10:09:17.161154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:45.212 [2024-11-04 10:09:17.161166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:45.212 [2024-11-04 10:09:17.161179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:45.212 [2024-11-04 10:09:17.161211] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:45.212 [2024-11-04 10:09:17.161224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:46.148 2737.33 IOPS, 10.69 MiB/s [2024-11-04T10:09:18.318Z] [2024-11-04 10:09:18.161354] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.148 [2024-11-04 10:09:18.161411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb07e50 with addr=10.0.0.3, port=4420 00:19:46.148 [2024-11-04 10:09:18.161428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb07e50 is same with the state(6) to be set 00:19:46.148 [2024-11-04 10:09:18.161461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07e50 (9): Bad file descriptor 00:19:46.148 [2024-11-04 10:09:18.161481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:46.148 [2024-11-04 10:09:18.161490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:46.148 [2024-11-04 10:09:18.161501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:46.148 [2024-11-04 10:09:18.161532] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:19:46.148 [2024-11-04 10:09:18.161545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:47.084 2053.00 IOPS, 8.02 MiB/s [2024-11-04T10:09:19.254Z] [2024-11-04 10:09:19.165270] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:47.084 [2024-11-04 10:09:19.165350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb07e50 with addr=10.0.0.3, port=4420 00:19:47.084 [2024-11-04 10:09:19.165368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb07e50 is same with the state(6) to be set 00:19:47.084 [2024-11-04 10:09:19.165934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb07e50 (9): Bad file descriptor 00:19:47.084 [2024-11-04 10:09:19.166303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:47.084 [2024-11-04 10:09:19.166328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:47.084 [2024-11-04 10:09:19.166341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:47.084 [2024-11-04 10:09:19.170313] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:19:47.084 [2024-11-04 10:09:19.170349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:47.084 10:09:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:47.342 [2024-11-04 10:09:19.502779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:47.600 10:09:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82061 00:19:48.118 1642.40 IOPS, 6.42 MiB/s [2024-11-04T10:09:20.288Z] [2024-11-04 10:09:20.209502] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
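The sequence above is the behaviour host/timeout.sh@99-@103 is checking for: once the listener on 10.0.0.3:4420 is removed, every queued I/O on the TCP qpair is completed with ABORTED - SQ DELETION (00/08), and each reconnect attempt fails with connect() errno 111 (ECONNREFUSED) until the listener is restored at @102, after which the controller reset finally succeeds. Below is a minimal bash sketch of the same remove/re-add sequence, using the rpc.py invocations exactly as they appear in this log; the paths, NQN and address are this environment's, and the 3-second gap mirrors the sleep at @101.

    #!/usr/bin/env bash
    # Sketch only: replays the listener remove/re-add exercised above against a live SPDK nvmf target.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # path as used in this log
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the listener: queued I/O is aborted with "ABORTED - SQ DELETION" and the
    # host's reconnect attempts fail with errno 111 while nothing is listening.
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420

    sleep 3   # in the log above the host retries roughly once per second during this window

    # Restore the listener; the next reconnect succeeds ("Resetting controller successful").
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420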
00:19:49.989 2589.83 IOPS, 10.12 MiB/s [2024-11-04T10:09:23.116Z] 3552.43 IOPS, 13.88 MiB/s [2024-11-04T10:09:24.060Z] 4296.88 IOPS, 16.78 MiB/s [2024-11-04T10:09:25.436Z] 4876.78 IOPS, 19.05 MiB/s [2024-11-04T10:09:25.436Z] 5349.90 IOPS, 20.90 MiB/s 00:19:53.266 Latency(us) 00:19:53.266 [2024-11-04T10:09:25.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.266 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.266 Verification LBA range: start 0x0 length 0x4000 00:19:53.266 NVMe0n1 : 10.01 5356.79 20.92 3624.24 0.00 14212.90 696.32 3019898.88 00:19:53.266 [2024-11-04T10:09:25.436Z] =================================================================================================================== 00:19:53.266 [2024-11-04T10:09:25.436Z] Total : 5356.79 20.92 3624.24 0.00 14212.90 0.00 3019898.88 00:19:53.266 { 00:19:53.266 "results": [ 00:19:53.266 { 00:19:53.266 "job": "NVMe0n1", 00:19:53.266 "core_mask": "0x4", 00:19:53.266 "workload": "verify", 00:19:53.266 "status": "finished", 00:19:53.266 "verify_range": { 00:19:53.266 "start": 0, 00:19:53.266 "length": 16384 00:19:53.266 }, 00:19:53.266 "queue_depth": 128, 00:19:53.266 "io_size": 4096, 00:19:53.266 "runtime": 10.009537, 00:19:53.266 "iops": 5356.79122820566, 00:19:53.266 "mibps": 20.92496573517836, 00:19:53.266 "io_failed": 36277, 00:19:53.266 "io_timeout": 0, 00:19:53.266 "avg_latency_us": 14212.89982347278, 00:19:53.266 "min_latency_us": 696.32, 00:19:53.266 "max_latency_us": 3019898.88 00:19:53.266 } 00:19:53.266 ], 00:19:53.266 "core_count": 1 00:19:53.266 } 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81940 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81940 ']' 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81940 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81940 00:19:53.266 killing process with pid 81940 00:19:53.266 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.266 00:19:53.266 Latency(us) 00:19:53.266 [2024-11-04T10:09:25.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.266 [2024-11-04T10:09:25.436Z] =================================================================================================================== 00:19:53.266 [2024-11-04T10:09:25.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81940' 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81940 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81940 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 
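The derived columns in the JSON block above follow directly from the raw counters: MiB/s is IOPS scaled by the 4096-byte I/O size (5356.79 * 4096 / 1048576 ~= 20.92), and Fail/s is io_failed divided by the runtime (36277 / 10.009537 ~= 3624.24), which in this run largely corresponds to the I/Os aborted while the listener was removed. A quick recomputation of both, using only values printed in the results above:

    # Recompute MiB/s and Fail/s from the JSON results printed above.
    awk -v iops=5356.79 -v failed=36277 -v rt=10.009537 -v iosz=4096 \
        'BEGIN { printf "MiB/s=%.2f  Fail/s=%.2f\n", iops*iosz/1048576, failed/rt }'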
00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82170 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82170 /var/tmp/bdevperf.sock 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82170 ']' 00:19:53.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:53.266 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:53.266 [2024-11-04 10:09:25.305498] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:19:53.267 [2024-11-04 10:09:25.305618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82170 ] 00:19:53.525 [2024-11-04 10:09:25.455234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.525 [2024-11-04 10:09:25.510883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.525 [2024-11-04 10:09:25.564285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:53.525 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.525 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:19:53.525 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82177 00:19:53.525 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82170 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:53.525 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:53.784 10:09:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:54.352 NVMe0n1 00:19:54.352 10:09:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.352 10:09:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82220 00:19:54.352 10:09:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:54.352 Running I/O for 10 seconds... 
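With the second bdevperf instance up, steps @109-@125 prepare the next timeout case: a randread workload (queue depth 128, 4 KiB I/O, 10 s, core mask 0x4, i.e. core 2) against a controller attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, which, as the option names suggest, makes the host retry the connection every 2 seconds and give the controller up after 5 seconds of loss; a bpftrace helper (@115-@116) is also attached to the bdevperf pid. A condensed bash sketch of the same setup follows, with every command reproduced from this log (the real script uses waitforlisten to poll /var/tmp/bdevperf.sock before sending RPCs).

    # Sketch of the setup traced above (host/timeout.sh@109-@125); paths are this environment's.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) on core 2 (-m 0x4): 128-deep 4 KiB random reads for 10 s.
    "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
    # (waitforlisten would poll "$sock" here before the RPCs below are issued.)

    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1 -e 9
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the I/O in the background (rpc_pid in the script); after a 1 s sleep the
    # script removes the listener (@126) to provoke the timeout/reconnect path.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &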
00:19:55.288 10:09:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:55.551 14732.00 IOPS, 57.55 MiB/s [2024-11-04T10:09:27.721Z] [2024-11-04 10:09:27.605159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41936 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.605983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.605992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:55.551 [2024-11-04 10:09:27.606370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.606783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.606794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.607040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.607062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.607073] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.607085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.607095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.607106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.607115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.607126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.607135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.607147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.607156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.607304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-11-04 10:09:27.607388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.551 [2024-11-04 10:09:27.607408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.607969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.607980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.608966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.608992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.609771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.609780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.610085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.610100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.610111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.610121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.610132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.610141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:55.552 [2024-11-04 10:09:27.610154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.610260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.610277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.610286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.610298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.610308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.552 [2024-11-04 10:09:27.610464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-11-04 10:09:27.610569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.610585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.610607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.610619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.610629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.610640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.610649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.610661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.610791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.610912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.610924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.610936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611215] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.611839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.611871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.612021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.612289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.612314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.612572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.612609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.612620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.612632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.612641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.612652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.612754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.612776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.612786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.612797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.613848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.613992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.614078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.614090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.614101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.614110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.614121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.614130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.614141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.614284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.614374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.614386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.614397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.614406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.553 [2024-11-04 10:09:27.614418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-11-04 10:09:27.614427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.614438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.614721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.614874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15192 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.615762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.615773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.616000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.616016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.616029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.616044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.616053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.616065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.554 [2024-11-04 10:09:27.616074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.616211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae11e0 is same with the state(6) to be set 00:19:55.554 [2024-11-04 10:09:27.616231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:55.554 [2024-11-04 10:09:27.616239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:55.554 [2024-11-04 10:09:27.616512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116704 len:8 PRP1 0x0 PRP2 0x0 00:19:55.554 [2024-11-04 10:09:27.616533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.616971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.554 [2024-11-04 10:09:27.616999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.617011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:19:55.554 [2024-11-04 10:09:27.617020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.617030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.554 [2024-11-04 10:09:27.617039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.617049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:55.554 [2024-11-04 10:09:27.617057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:55.554 [2024-11-04 10:09:27.617066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73e50 is same with the state(6) to be set 00:19:55.554 [2024-11-04 10:09:27.617537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:55.554 [2024-11-04 10:09:27.617571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a73e50 (9): Bad file descriptor 00:19:55.554 [2024-11-04 10:09:27.617871] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.554 [2024-11-04 10:09:27.617904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a73e50 with addr=10.0.0.3, port=4420 00:19:55.554 [2024-11-04 10:09:27.617917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73e50 is same with the state(6) to be set 00:19:55.554 [2024-11-04 10:09:27.617937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a73e50 (9): Bad file descriptor 00:19:55.554 [2024-11-04 10:09:27.617955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:55.554 [2024-11-04 10:09:27.617964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:55.554 [2024-11-04 10:09:27.618225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:55.554 [2024-11-04 10:09:27.618265] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
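The fault injection that produced the wall of aborted completions above is the nvmf_subsystem_remove_listener call at the top of this block (host/timeout.sh@126 in the trace): the target drops its TCP listener on 10.0.0.3:4420 while bdevperf still has up to 128 reads queued, so the in-flight commands complete as ABORTED - SQ DELETION, the host disconnects the controller, and every subsequent reconnect attempt fails with errno 111 (ECONNREFUSED) because nothing is listening any more. A minimal way to reproduce the trigger by hand; the rpc.py call is copied from the log, while the nc probe is only an illustrative check that the port really is closed:

    # Drop the target-side listener while I/O is still running.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Confirm the port is gone; the host's reconnects now see ECONNREFUSED (111).
    nc -z -w 1 10.0.0.3 4420 || echo 'port 4420 closed'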
00:19:55.554 [2024-11-04 10:09:27.618279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:55.554 10:09:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82220 00:19:57.424 8763.00 IOPS, 34.23 MiB/s [2024-11-04T10:09:29.852Z] 5842.00 IOPS, 22.82 MiB/s [2024-11-04T10:09:29.852Z] [2024-11-04 10:09:29.618492] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.682 [2024-11-04 10:09:29.618622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a73e50 with addr=10.0.0.3, port=4420 00:19:57.682 [2024-11-04 10:09:29.618642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73e50 is same with the state(6) to be set 00:19:57.682 [2024-11-04 10:09:29.618671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a73e50 (9): Bad file descriptor 00:19:57.682 [2024-11-04 10:09:29.618693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:57.683 [2024-11-04 10:09:29.618704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:57.683 [2024-11-04 10:09:29.618715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:57.683 [2024-11-04 10:09:29.618747] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:57.683 [2024-11-04 10:09:29.618760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:59.557 4381.50 IOPS, 17.12 MiB/s [2024-11-04T10:09:31.727Z] 3505.20 IOPS, 13.69 MiB/s [2024-11-04T10:09:31.727Z] [2024-11-04 10:09:31.618968] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.557 [2024-11-04 10:09:31.619076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a73e50 with addr=10.0.0.3, port=4420 00:19:59.557 [2024-11-04 10:09:31.619094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a73e50 is same with the state(6) to be set 00:19:59.557 [2024-11-04 10:09:31.619121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a73e50 (9): Bad file descriptor 00:19:59.557 [2024-11-04 10:09:31.619142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:59.557 [2024-11-04 10:09:31.619152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:59.557 [2024-11-04 10:09:31.619162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:59.557 [2024-11-04 10:09:31.619197] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:59.557 [2024-11-04 10:09:31.619210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:01.429 2921.00 IOPS, 11.41 MiB/s [2024-11-04T10:09:33.858Z] 2503.71 IOPS, 9.78 MiB/s [2024-11-04T10:09:33.858Z] [2024-11-04 10:09:33.619308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:20:01.688 [2024-11-04 10:09:33.619394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:01.688 [2024-11-04 10:09:33.619424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:01.688 [2024-11-04 10:09:33.619434] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:20:01.688 [2024-11-04 10:09:33.619468] bdev_nvme.c:2248:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:02.556 2190.75 IOPS, 8.56 MiB/s 00:20:02.556 Latency(us) 00:20:02.556 [2024-11-04T10:09:34.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.556 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:02.556 NVMe0n1 : 8.20 2138.00 8.35 15.61 0.00 59335.63 8281.37 7046430.72 00:20:02.556 [2024-11-04T10:09:34.726Z] =================================================================================================================== 00:20:02.556 [2024-11-04T10:09:34.726Z] Total : 2138.00 8.35 15.61 0.00 59335.63 8281.37 7046430.72 00:20:02.556 { 00:20:02.556 "results": [ 00:20:02.556 { 00:20:02.556 "job": "NVMe0n1", 00:20:02.556 "core_mask": "0x4", 00:20:02.556 "workload": "randread", 00:20:02.556 "status": "finished", 00:20:02.556 "queue_depth": 128, 00:20:02.556 "io_size": 4096, 00:20:02.556 "runtime": 8.197397, 00:20:02.556 "iops": 2137.995756457812, 00:20:02.556 "mibps": 8.351545923663329, 00:20:02.556 "io_failed": 128, 00:20:02.556 "io_timeout": 0, 00:20:02.556 "avg_latency_us": 59335.62700557175, 00:20:02.557 "min_latency_us": 8281.367272727273, 00:20:02.557 "max_latency_us": 7046430.72 00:20:02.557 } 00:20:02.557 ], 00:20:02.557 "core_count": 1 00:20:02.557 } 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:02.557 Attaching 5 probes... 
00:20:02.557 1453.689975: reset bdev controller NVMe0 00:20:02.557 1453.786699: reconnect bdev controller NVMe0 00:20:02.557 3454.485819: reconnect delay bdev controller NVMe0 00:20:02.557 3454.531165: reconnect bdev controller NVMe0 00:20:02.557 5454.968067: reconnect delay bdev controller NVMe0 00:20:02.557 5455.010871: reconnect bdev controller NVMe0 00:20:02.557 7455.421068: reconnect delay bdev controller NVMe0 00:20:02.557 7455.448645: reconnect bdev controller NVMe0 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82177 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82170 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82170 ']' 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82170 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82170 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82170' 00:20:02.557 killing process with pid 82170 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82170 00:20:02.557 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82170 00:20:02.557 Received shutdown signal, test time was about 8.265586 seconds 00:20:02.557 00:20:02.557 Latency(us) 00:20:02.557 [2024-11-04T10:09:34.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.557 [2024-11-04T10:09:34.727Z] =================================================================================================================== 00:20:02.557 [2024-11-04T10:09:34.727Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.814 10:09:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.072 10:09:35 
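The pass/fail decision for this case is the grep just above: the bpftrace output in trace.txt contains a "reconnect delay bdev controller NVMe0" line each time the bdev layer sits out the configured delay before retrying, and the trace shows three of them, spaced the expected two seconds apart, so the (( 3 <= 2 )) test does not trip and the tracer (pid 82177) and bdevperf (pid 82170) are torn down. Roughly what that check expresses, as a standalone sketch; the variable name is ours and the real timeout.sh control flow may differ:

    reconnects=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
    if (( reconnects <= 2 )); then
        echo "expected at least 3 delayed reconnect attempts, got $reconnects" >&2
        exit 1
    fi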
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.072 rmmod nvme_tcp 00:20:03.072 rmmod nvme_fabrics 00:20:03.072 rmmod nvme_keyring 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81741 ']' 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81741 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81741 ']' 00:20:03.072 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81741 00:20:03.330 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:20:03.330 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:03.330 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81741 00:20:03.330 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:03.330 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:03.330 killing process with pid 81741 00:20:03.330 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81741' 00:20:03.330 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81741 00:20:03.330 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81741 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:03.589 10:09:35 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:03.589 00:20:03.589 real 0m46.595s 00:20:03.589 user 2m16.880s 00:20:03.589 sys 0m5.572s 00:20:03.589 ************************************ 00:20:03.589 END TEST nvmf_timeout 00:20:03.589 ************************************ 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:03.589 10:09:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:03.849 10:09:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:03.849 10:09:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:03.849 00:20:03.849 real 5m11.922s 00:20:03.849 user 13m37.352s 00:20:03.849 sys 1m9.657s 00:20:03.849 10:09:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:03.849 10:09:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.849 ************************************ 00:20:03.849 END TEST nvmf_host 00:20:03.849 ************************************ 00:20:03.849 10:09:35 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:03.849 10:09:35 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:03.849 00:20:03.849 real 12m55.397s 00:20:03.849 user 31m17.490s 00:20:03.849 sys 3m9.340s 00:20:03.849 10:09:35 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:03.849 ************************************ 00:20:03.849 END TEST nvmf_tcp 00:20:03.849 10:09:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:03.849 ************************************ 00:20:03.849 10:09:35 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:20:03.849 10:09:35 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:03.849 10:09:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:03.849 10:09:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:03.849 10:09:35 -- common/autotest_common.sh@10 -- # set +x 00:20:03.849 ************************************ 00:20:03.849 START TEST nvmf_dif 00:20:03.849 ************************************ 00:20:03.849 10:09:35 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:03.849 * Looking for test storage... 
00:20:03.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:03.849 10:09:35 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:03.849 10:09:35 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:20:03.849 10:09:35 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:04.119 10:09:36 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.119 10:09:36 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:04.120 10:09:36 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.120 10:09:36 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:04.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.120 --rc genhtml_branch_coverage=1 00:20:04.120 --rc genhtml_function_coverage=1 00:20:04.120 --rc genhtml_legend=1 00:20:04.120 --rc geninfo_all_blocks=1 00:20:04.120 --rc geninfo_unexecuted_blocks=1 00:20:04.120 00:20:04.120 ' 00:20:04.120 10:09:36 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:04.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.120 --rc genhtml_branch_coverage=1 00:20:04.120 --rc genhtml_function_coverage=1 00:20:04.120 --rc genhtml_legend=1 00:20:04.120 --rc geninfo_all_blocks=1 00:20:04.120 --rc geninfo_unexecuted_blocks=1 00:20:04.120 00:20:04.120 ' 00:20:04.120 10:09:36 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:20:04.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.120 --rc genhtml_branch_coverage=1 00:20:04.120 --rc genhtml_function_coverage=1 00:20:04.120 --rc genhtml_legend=1 00:20:04.120 --rc geninfo_all_blocks=1 00:20:04.120 --rc geninfo_unexecuted_blocks=1 00:20:04.120 00:20:04.120 ' 00:20:04.120 10:09:36 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:04.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.120 --rc genhtml_branch_coverage=1 00:20:04.120 --rc genhtml_function_coverage=1 00:20:04.120 --rc genhtml_legend=1 00:20:04.120 --rc geninfo_all_blocks=1 00:20:04.120 --rc geninfo_unexecuted_blocks=1 00:20:04.120 00:20:04.120 ' 00:20:04.120 10:09:36 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.120 10:09:36 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.120 10:09:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.120 10:09:36 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.120 10:09:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.120 10:09:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:04.120 10:09:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.120 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.120 10:09:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:04.120 10:09:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:04.120 10:09:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:04.120 10:09:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:04.120 10:09:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.120 10:09:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:04.120 10:09:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:04.120 10:09:36 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:04.120 Cannot find device "nvmf_init_br" 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:04.120 Cannot find device "nvmf_init_br2" 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:04.120 Cannot find device "nvmf_tgt_br" 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.120 Cannot find device "nvmf_tgt_br2" 00:20:04.120 10:09:36 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:04.121 Cannot find device "nvmf_init_br" 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:04.121 Cannot find device "nvmf_init_br2" 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:04.121 Cannot find device "nvmf_tgt_br" 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:04.121 Cannot find device "nvmf_tgt_br2" 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:04.121 Cannot find device "nvmf_br" 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:20:04.121 Cannot find device "nvmf_init_if" 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:04.121 Cannot find device "nvmf_init_if2" 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.121 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.121 10:09:36 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.379 10:09:36 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:04.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:04.379 00:20:04.379 --- 10.0.0.3 ping statistics --- 00:20:04.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.379 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:04.379 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:04.379 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:20:04.379 00:20:04.379 --- 10.0.0.4 ping statistics --- 00:20:04.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.379 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:04.379 00:20:04.379 --- 10.0.0.1 ping statistics --- 00:20:04.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.379 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:04.379 10:09:36 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:04.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:04.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:20:04.379 00:20:04.380 --- 10.0.0.2 ping statistics --- 00:20:04.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.380 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:04.380 10:09:36 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.380 10:09:36 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:04.380 10:09:36 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:04.380 10:09:36 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:04.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:04.897 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:04.897 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:04.897 10:09:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:04.897 10:09:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.897 10:09:36 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.897 10:09:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82714 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:04.897 10:09:36 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82714 00:20:04.897 10:09:36 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 82714 ']' 00:20:04.897 10:09:36 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.897 10:09:36 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:04.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.897 10:09:36 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.897 10:09:36 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:04.897 10:09:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:04.897 [2024-11-04 10:09:36.943531] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:20:04.897 [2024-11-04 10:09:36.943664] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.156 [2024-11-04 10:09:37.094503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.156 [2024-11-04 10:09:37.160072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:05.156 [2024-11-04 10:09:37.160148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.156 [2024-11-04 10:09:37.160163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.156 [2024-11-04 10:09:37.160174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.156 [2024-11-04 10:09:37.160184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.156 [2024-11-04 10:09:37.160664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.156 [2024-11-04 10:09:37.218800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.156 10:09:37 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.156 10:09:37 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:20:05.156 10:09:37 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:05.156 10:09:37 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:05.156 10:09:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:05.414 10:09:37 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.414 10:09:37 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:05.414 10:09:37 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:05.414 10:09:37 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.414 10:09:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:05.414 [2024-11-04 10:09:37.337008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.414 10:09:37 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.414 10:09:37 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:05.414 10:09:37 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:05.414 10:09:37 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:05.414 10:09:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:05.414 ************************************ 00:20:05.414 START TEST fio_dif_1_default 00:20:05.414 ************************************ 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:05.414 bdev_null0 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:05.414 
10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:05.414 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:05.415 [2024-11-04 10:09:37.381141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:05.415 { 00:20:05.415 "params": { 00:20:05.415 "name": "Nvme$subsystem", 00:20:05.415 "trtype": "$TEST_TRANSPORT", 00:20:05.415 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.415 "adrfam": "ipv4", 00:20:05.415 "trsvcid": "$NVMF_PORT", 00:20:05.415 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.415 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.415 "hdgst": ${hdgst:-false}, 00:20:05.415 "ddgst": ${ddgst:-false} 00:20:05.415 }, 00:20:05.415 "method": "bdev_nvme_attach_controller" 00:20:05.415 } 00:20:05.415 EOF 00:20:05.415 )") 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # 
local sanitizers 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:05.415 "params": { 00:20:05.415 "name": "Nvme0", 00:20:05.415 "trtype": "tcp", 00:20:05.415 "traddr": "10.0.0.3", 00:20:05.415 "adrfam": "ipv4", 00:20:05.415 "trsvcid": "4420", 00:20:05.415 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:05.415 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:05.415 "hdgst": false, 00:20:05.415 "ddgst": false 00:20:05.415 }, 00:20:05.415 "method": "bdev_nvme_attach_controller" 00:20:05.415 }' 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:05.415 10:09:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:05.674 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:05.674 fio-3.35 00:20:05.674 Starting 1 thread 00:20:17.884 00:20:17.884 filename0: (groupid=0, jobs=1): err= 0: pid=82772: Mon Nov 4 10:09:48 2024 00:20:17.884 read: IOPS=8319, BW=32.5MiB/s (34.1MB/s)(325MiB/10001msec) 00:20:17.884 slat (nsec): min=6570, max=59743, avg=9124.97, stdev=3408.29 00:20:17.884 clat (usec): min=359, max=3884, avg=453.49, stdev=41.88 00:20:17.884 lat (usec): min=366, max=3932, avg=462.62, stdev=42.63 00:20:17.884 clat percentiles (usec): 00:20:17.884 | 1.00th=[ 408], 5.00th=[ 416], 
10.00th=[ 420], 20.00th=[ 429], 00:20:17.884 | 30.00th=[ 433], 40.00th=[ 441], 50.00th=[ 445], 60.00th=[ 453], 00:20:17.884 | 70.00th=[ 461], 80.00th=[ 478], 90.00th=[ 498], 95.00th=[ 519], 00:20:17.884 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 603], 99.95th=[ 619], 00:20:17.884 | 99.99th=[ 1020] 00:20:17.884 bw ( KiB/s): min=31616, max=34432, per=100.00%, avg=33281.58, stdev=719.53, samples=19 00:20:17.884 iops : min= 7904, max= 8608, avg=8320.37, stdev=179.90, samples=19 00:20:17.884 lat (usec) : 500=90.45%, 750=9.52%, 1000=0.01% 00:20:17.884 lat (msec) : 2=0.02%, 4=0.01% 00:20:17.884 cpu : usr=84.28%, sys=13.65%, ctx=25, majf=0, minf=9 00:20:17.884 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.884 issued rwts: total=83204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.884 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:17.884 00:20:17.884 Run status group 0 (all jobs): 00:20:17.884 READ: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=325MiB (341MB), run=10001-10001msec 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 ************************************ 00:20:17.884 END TEST fio_dif_1_default 00:20:17.884 ************************************ 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 00:20:17.884 real 0m11.089s 00:20:17.884 user 0m9.138s 00:20:17.884 sys 0m1.659s 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 10:09:48 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:17.884 10:09:48 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:17.884 10:09:48 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 ************************************ 00:20:17.884 START TEST fio_dif_1_multi_subsystems 00:20:17.884 ************************************ 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 bdev_null0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 [2024-11-04 10:09:48.530514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 bdev_null1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:17.884 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.884 { 00:20:17.884 "params": { 00:20:17.884 "name": "Nvme$subsystem", 00:20:17.884 "trtype": "$TEST_TRANSPORT", 00:20:17.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.884 "adrfam": "ipv4", 00:20:17.885 "trsvcid": "$NVMF_PORT", 00:20:17.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.885 "hdgst": ${hdgst:-false}, 00:20:17.885 "ddgst": ${ddgst:-false} 00:20:17.885 }, 00:20:17.885 "method": "bdev_nvme_attach_controller" 00:20:17.885 } 00:20:17.885 EOF 00:20:17.885 )") 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 
00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:17.885 { 00:20:17.885 "params": { 00:20:17.885 "name": "Nvme$subsystem", 00:20:17.885 "trtype": "$TEST_TRANSPORT", 00:20:17.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.885 "adrfam": "ipv4", 00:20:17.885 "trsvcid": "$NVMF_PORT", 00:20:17.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.885 "hdgst": ${hdgst:-false}, 00:20:17.885 "ddgst": ${ddgst:-false} 00:20:17.885 }, 00:20:17.885 "method": "bdev_nvme_attach_controller" 00:20:17.885 } 00:20:17.885 EOF 00:20:17.885 )") 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:17.885 "params": { 00:20:17.885 "name": "Nvme0", 00:20:17.885 "trtype": "tcp", 00:20:17.885 "traddr": "10.0.0.3", 00:20:17.885 "adrfam": "ipv4", 00:20:17.885 "trsvcid": "4420", 00:20:17.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:17.885 "hdgst": false, 00:20:17.885 "ddgst": false 00:20:17.885 }, 00:20:17.885 "method": "bdev_nvme_attach_controller" 00:20:17.885 },{ 00:20:17.885 "params": { 00:20:17.885 "name": "Nvme1", 00:20:17.885 "trtype": "tcp", 00:20:17.885 "traddr": "10.0.0.3", 00:20:17.885 "adrfam": "ipv4", 00:20:17.885 "trsvcid": "4420", 00:20:17.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.885 "hdgst": false, 00:20:17.885 "ddgst": false 00:20:17.885 }, 00:20:17.885 "method": "bdev_nvme_attach_controller" 00:20:17.885 }' 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:17.885 10:09:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:17.885 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:17.885 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:17.885 fio-3.35 00:20:17.885 Starting 2 threads 00:20:27.863 00:20:27.863 filename0: (groupid=0, jobs=1): err= 0: pid=82933: Mon Nov 4 10:09:59 2024 00:20:27.863 read: IOPS=4601, BW=18.0MiB/s (18.8MB/s)(180MiB/10001msec) 00:20:27.863 slat (usec): min=6, max=363, avg=13.48, stdev= 5.77 00:20:27.863 clat (usec): min=481, max=2809, avg=831.73, stdev=82.26 00:20:27.863 lat (usec): min=503, max=2823, avg=845.20, stdev=82.74 00:20:27.863 clat percentiles (usec): 00:20:27.863 | 1.00th=[ 717], 5.00th=[ 750], 10.00th=[ 766], 20.00th=[ 783], 00:20:27.863 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 832], 00:20:27.863 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 898], 95.00th=[ 1012], 00:20:27.863 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1287], 99.95th=[ 1369], 00:20:27.863 | 99.99th=[ 1745] 00:20:27.863 bw ( KiB/s): min=16512, max=19104, per=49.91%, avg=18374.74, stdev=699.51, samples=19 00:20:27.863 iops : min= 4128, max= 4776, 
avg=4593.68, stdev=174.88, samples=19 00:20:27.863 lat (usec) : 500=0.01%, 750=5.86%, 1000=88.81% 00:20:27.863 lat (msec) : 2=5.32%, 4=0.01% 00:20:27.863 cpu : usr=89.91%, sys=8.43%, ctx=57, majf=0, minf=0 00:20:27.863 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.863 issued rwts: total=46020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.863 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:27.863 filename1: (groupid=0, jobs=1): err= 0: pid=82934: Mon Nov 4 10:09:59 2024 00:20:27.863 read: IOPS=4602, BW=18.0MiB/s (18.9MB/s)(180MiB/10001msec) 00:20:27.863 slat (nsec): min=6581, max=74542, avg=13161.03, stdev=4663.21 00:20:27.863 clat (usec): min=424, max=2823, avg=833.19, stdev=86.62 00:20:27.863 lat (usec): min=432, max=2835, avg=846.35, stdev=87.44 00:20:27.863 clat percentiles (usec): 00:20:27.863 | 1.00th=[ 693], 5.00th=[ 725], 10.00th=[ 750], 20.00th=[ 775], 00:20:27.863 | 30.00th=[ 791], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 832], 00:20:27.863 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 914], 95.00th=[ 1012], 00:20:27.863 | 99.00th=[ 1156], 99.50th=[ 1188], 99.90th=[ 1287], 99.95th=[ 1369], 00:20:27.863 | 99.99th=[ 1745] 00:20:27.863 bw ( KiB/s): min=16512, max=19104, per=49.92%, avg=18379.79, stdev=698.09, samples=19 00:20:27.863 iops : min= 4128, max= 4776, avg=4594.95, stdev=174.52, samples=19 00:20:27.863 lat (usec) : 500=0.03%, 750=10.42%, 1000=84.25% 00:20:27.863 lat (msec) : 2=5.30%, 4=0.01% 00:20:27.863 cpu : usr=89.55%, sys=9.07%, ctx=19, majf=0, minf=0 00:20:27.863 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.863 issued rwts: total=46032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.863 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:27.863 00:20:27.863 Run status group 0 (all jobs): 00:20:27.863 READ: bw=36.0MiB/s (37.7MB/s), 18.0MiB/s-18.0MiB/s (18.8MB/s-18.9MB/s), io=360MiB (377MB), run=10001-10001msec 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.863 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 ************************************ 00:20:27.864 END TEST fio_dif_1_multi_subsystems 00:20:27.864 ************************************ 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.864 00:20:27.864 real 0m11.245s 00:20:27.864 user 0m18.789s 00:20:27.864 sys 0m2.061s 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:27.864 10:09:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 10:09:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:27.864 10:09:59 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:27.864 10:09:59 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:27.864 10:09:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 ************************************ 00:20:27.864 START TEST fio_dif_rand_params 00:20:27.864 ************************************ 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:27.864 10:09:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 bdev_null0 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 [2024-11-04 10:09:59.825288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.864 { 00:20:27.864 "params": { 00:20:27.864 "name": "Nvme$subsystem", 00:20:27.864 "trtype": "$TEST_TRANSPORT", 00:20:27.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.864 "adrfam": "ipv4", 00:20:27.864 "trsvcid": "$NVMF_PORT", 00:20:27.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.864 "hdgst": ${hdgst:-false}, 00:20:27.864 "ddgst": ${ddgst:-false} 00:20:27.864 }, 00:20:27.864 "method": "bdev_nvme_attach_controller" 00:20:27.864 } 00:20:27.864 EOF 00:20:27.864 )") 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:27.864 "params": { 00:20:27.864 "name": "Nvme0", 00:20:27.864 "trtype": "tcp", 00:20:27.864 "traddr": "10.0.0.3", 00:20:27.864 "adrfam": "ipv4", 00:20:27.864 "trsvcid": "4420", 00:20:27.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:27.864 "hdgst": false, 00:20:27.864 "ddgst": false 00:20:27.864 }, 00:20:27.864 "method": "bdev_nvme_attach_controller" 00:20:27.864 }' 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:27.864 10:09:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:28.123 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:28.123 ... 00:20:28.123 fio-3.35 00:20:28.123 Starting 3 threads 00:20:34.702 00:20:34.702 filename0: (groupid=0, jobs=1): err= 0: pid=83090: Mon Nov 4 10:10:05 2024 00:20:34.702 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(164MiB/5002msec) 00:20:34.702 slat (nsec): min=7560, max=73459, avg=10225.75, stdev=3988.72 00:20:34.702 clat (usec): min=4566, max=11993, avg=11419.17, stdev=334.43 00:20:34.702 lat (usec): min=4575, max=12011, avg=11429.40, stdev=333.51 00:20:34.702 clat percentiles (usec): 00:20:34.702 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:34.702 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:34.702 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11469], 00:20:34.702 | 99.00th=[11600], 99.50th=[11994], 99.90th=[11994], 99.95th=[11994], 00:20:34.702 | 99.99th=[11994] 00:20:34.702 bw ( KiB/s): min=33024, max=33792, per=33.34%, avg=33536.00, stdev=384.00, samples=9 00:20:34.702 iops : min= 258, max= 264, avg=262.00, stdev= 3.00, samples=9 00:20:34.702 lat (msec) : 10=0.23%, 20=99.77% 00:20:34.702 cpu : usr=91.04%, sys=8.44%, ctx=11, majf=0, minf=0 00:20:34.702 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.702 issued rwts: total=1311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.702 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:34.702 filename0: (groupid=0, jobs=1): err= 0: pid=83091: Mon Nov 4 10:10:05 2024 00:20:34.702 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5004msec) 00:20:34.702 slat (nsec): min=5240, max=27736, avg=9989.26, stdev=3073.84 00:20:34.702 clat (usec): min=7930, max=13595, avg=11426.28, stdev=261.90 00:20:34.702 lat (usec): min=7938, max=13620, avg=11436.27, stdev=262.20 00:20:34.702 clat percentiles (usec): 00:20:34.702 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:34.702 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:34.702 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11469], 00:20:34.702 | 99.00th=[11600], 99.50th=[11994], 99.90th=[13566], 99.95th=[13566], 00:20:34.702 | 99.99th=[13566] 00:20:34.702 bw ( KiB/s): min=33024, max=33792, per=33.26%, avg=33458.00, stdev=396.59, samples=9 00:20:34.702 iops : min= 258, max= 264, avg=261.33, stdev= 3.16, samples=9 00:20:34.702 lat (msec) : 10=0.46%, 20=99.54% 00:20:34.702 cpu : usr=91.86%, sys=7.64%, ctx=16, majf=0, minf=0 00:20:34.702 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.702 issued rwts: total=1311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.702 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:34.702 filename0: (groupid=0, jobs=1): err= 0: pid=83092: Mon Nov 4 10:10:05 2024 00:20:34.702 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5005msec) 00:20:34.702 slat (nsec): min=7361, max=35450, avg=10385.87, stdev=3690.59 00:20:34.702 clat (usec): min=7985, max=12380, 
avg=11425.57, stdev=203.16 00:20:34.702 lat (usec): min=7993, max=12415, avg=11435.96, stdev=203.37 00:20:34.702 clat percentiles (usec): 00:20:34.702 | 1.00th=[11338], 5.00th=[11338], 10.00th=[11338], 20.00th=[11338], 00:20:34.702 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11469], 60.00th=[11469], 00:20:34.702 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11469], 95.00th=[11469], 00:20:34.702 | 99.00th=[11600], 99.50th=[11994], 99.90th=[12387], 99.95th=[12387], 00:20:34.702 | 99.99th=[12387] 00:20:34.702 bw ( KiB/s): min=33024, max=33792, per=33.26%, avg=33450.67, stdev=404.77, samples=9 00:20:34.702 iops : min= 258, max= 264, avg=261.33, stdev= 3.16, samples=9 00:20:34.702 lat (msec) : 10=0.46%, 20=99.54% 00:20:34.702 cpu : usr=91.05%, sys=8.39%, ctx=14, majf=0, minf=0 00:20:34.702 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:34.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.702 issued rwts: total=1311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.702 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:34.702 00:20:34.702 Run status group 0 (all jobs): 00:20:34.702 READ: bw=98.2MiB/s (103MB/s), 32.7MiB/s-32.8MiB/s (34.3MB/s-34.4MB/s), io=492MiB (516MB), run=5002-5005msec 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:34.702 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 bdev_null0 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 [2024-11-04 10:10:05.888076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 bdev_null1 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 bdev_null2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.703 10:10:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.703 { 00:20:34.703 "params": { 00:20:34.703 "name": "Nvme$subsystem", 00:20:34.703 "trtype": "$TEST_TRANSPORT", 00:20:34.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.703 "adrfam": "ipv4", 00:20:34.703 "trsvcid": "$NVMF_PORT", 00:20:34.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.703 "hdgst": ${hdgst:-false}, 00:20:34.703 "ddgst": ${ddgst:-false} 00:20:34.703 }, 00:20:34.703 "method": "bdev_nvme_attach_controller" 00:20:34.703 } 00:20:34.703 EOF 00:20:34.703 )") 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.703 { 00:20:34.703 "params": { 00:20:34.703 "name": "Nvme$subsystem", 00:20:34.703 "trtype": "$TEST_TRANSPORT", 00:20:34.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.703 "adrfam": "ipv4", 00:20:34.703 "trsvcid": "$NVMF_PORT", 00:20:34.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.703 "hdgst": ${hdgst:-false}, 00:20:34.703 "ddgst": ${ddgst:-false} 00:20:34.703 }, 00:20:34.703 "method": "bdev_nvme_attach_controller" 00:20:34.703 } 00:20:34.703 EOF 00:20:34.703 )") 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.703 { 00:20:34.703 "params": { 00:20:34.703 "name": "Nvme$subsystem", 00:20:34.703 "trtype": "$TEST_TRANSPORT", 00:20:34.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.703 "adrfam": "ipv4", 00:20:34.703 "trsvcid": "$NVMF_PORT", 00:20:34.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.703 "hdgst": ${hdgst:-false}, 00:20:34.703 "ddgst": ${ddgst:-false} 00:20:34.703 }, 00:20:34.703 "method": "bdev_nvme_attach_controller" 00:20:34.703 } 00:20:34.703 EOF 00:20:34.703 )") 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:34.703 10:10:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:34.703 "params": { 00:20:34.703 "name": "Nvme0", 00:20:34.703 "trtype": "tcp", 00:20:34.703 "traddr": "10.0.0.3", 00:20:34.704 "adrfam": "ipv4", 00:20:34.704 "trsvcid": "4420", 00:20:34.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:34.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:34.704 "hdgst": false, 00:20:34.704 "ddgst": false 00:20:34.704 }, 00:20:34.704 "method": "bdev_nvme_attach_controller" 00:20:34.704 },{ 00:20:34.704 "params": { 00:20:34.704 "name": "Nvme1", 00:20:34.704 "trtype": "tcp", 00:20:34.704 "traddr": "10.0.0.3", 00:20:34.704 "adrfam": "ipv4", 00:20:34.704 "trsvcid": "4420", 00:20:34.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.704 "hdgst": false, 00:20:34.704 "ddgst": false 00:20:34.704 }, 00:20:34.704 "method": "bdev_nvme_attach_controller" 00:20:34.704 },{ 00:20:34.704 "params": { 00:20:34.704 "name": "Nvme2", 00:20:34.704 "trtype": "tcp", 00:20:34.704 "traddr": "10.0.0.3", 00:20:34.704 "adrfam": "ipv4", 00:20:34.704 "trsvcid": "4420", 00:20:34.704 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:34.704 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:34.704 "hdgst": false, 00:20:34.704 "ddgst": false 00:20:34.704 }, 00:20:34.704 "method": "bdev_nvme_attach_controller" 00:20:34.704 }' 00:20:34.704 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:34.704 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:34.704 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.704 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.704 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:34.704 10:10:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # 
awk '{print $3}' 00:20:34.704 10:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:34.704 10:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:34.704 10:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.704 10:10:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:34.704 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:34.704 ... 00:20:34.704 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:34.704 ... 00:20:34.704 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:34.704 ... 00:20:34.704 fio-3.35 00:20:34.704 Starting 24 threads 00:20:46.917 00:20:46.917 filename0: (groupid=0, jobs=1): err= 0: pid=83187: Mon Nov 4 10:10:17 2024 00:20:46.917 read: IOPS=241, BW=966KiB/s (989kB/s)(9664KiB/10004msec) 00:20:46.917 slat (usec): min=4, max=8041, avg=38.66, stdev=382.06 00:20:46.917 clat (msec): min=3, max=156, avg=66.06, stdev=26.09 00:20:46.917 lat (msec): min=3, max=156, avg=66.10, stdev=26.08 00:20:46.917 clat percentiles (msec): 00:20:46.917 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.917 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 70], 60.00th=[ 72], 00:20:46.917 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 107], 95.00th=[ 117], 00:20:46.917 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 157], 99.95th=[ 157], 00:20:46.917 | 99.99th=[ 157] 00:20:46.917 bw ( KiB/s): min= 664, max= 1280, per=4.19%, avg=928.00, stdev=180.08, samples=19 00:20:46.917 iops : min= 166, max= 320, avg=231.95, stdev=45.04, samples=19 00:20:46.917 lat (msec) : 4=0.25%, 10=2.40%, 20=0.95%, 50=27.40%, 100=57.00% 00:20:46.917 lat (msec) : 250=12.00% 00:20:46.917 cpu : usr=37.85%, sys=1.69%, ctx=1153, majf=0, minf=9 00:20:46.917 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:46.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 issued rwts: total=2416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.917 filename0: (groupid=0, jobs=1): err= 0: pid=83188: Mon Nov 4 10:10:17 2024 00:20:46.917 read: IOPS=241, BW=965KiB/s (989kB/s)(9724KiB/10072msec) 00:20:46.917 slat (usec): min=5, max=8026, avg=21.16, stdev=186.36 00:20:46.917 clat (usec): min=1569, max=146931, avg=66059.79, stdev=31199.70 00:20:46.917 lat (usec): min=1578, max=146956, avg=66080.95, stdev=31203.41 00:20:46.917 clat percentiles (usec): 00:20:46.917 | 1.00th=[ 1680], 5.00th=[ 3752], 10.00th=[ 22938], 20.00th=[ 38536], 00:20:46.917 | 30.00th=[ 49546], 40.00th=[ 63701], 50.00th=[ 71828], 60.00th=[ 73925], 00:20:46.917 | 70.00th=[ 80217], 80.00th=[ 85459], 90.00th=[108528], 95.00th=[116917], 00:20:46.917 | 99.00th=[129500], 99.50th=[141558], 99.90th=[147850], 99.95th=[147850], 00:20:46.917 | 99.99th=[147850] 00:20:46.917 bw ( KiB/s): min= 648, max= 2936, per=4.36%, avg=966.00, stdev=496.75, samples=20 00:20:46.917 iops : min= 162, max= 734, avg=241.50, stdev=124.19, samples=20 00:20:46.917 lat (msec) : 2=1.97%, 4=3.74%, 10=1.44%, 20=1.40%, 
50=21.97% 00:20:46.917 lat (msec) : 100=54.13%, 250=15.34% 00:20:46.917 cpu : usr=42.83%, sys=2.12%, ctx=1232, majf=0, minf=0 00:20:46.917 IO depths : 1=0.4%, 2=1.3%, 4=3.7%, 8=78.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:46.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 complete : 0=0.0%, 4=88.6%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 issued rwts: total=2431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.917 filename0: (groupid=0, jobs=1): err= 0: pid=83189: Mon Nov 4 10:10:17 2024 00:20:46.917 read: IOPS=233, BW=932KiB/s (955kB/s)(9344KiB/10024msec) 00:20:46.917 slat (usec): min=5, max=8063, avg=33.77, stdev=287.74 00:20:46.917 clat (msec): min=22, max=153, avg=68.49, stdev=24.94 00:20:46.917 lat (msec): min=22, max=153, avg=68.53, stdev=24.94 00:20:46.917 clat percentiles (msec): 00:20:46.917 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.917 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 73], 00:20:46.917 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 117], 00:20:46.917 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:20:46.917 | 99.99th=[ 155] 00:20:46.917 bw ( KiB/s): min= 688, max= 1320, per=4.19%, avg=927.75, stdev=169.12, samples=20 00:20:46.917 iops : min= 172, max= 330, avg=231.90, stdev=42.22, samples=20 00:20:46.917 lat (msec) : 50=27.65%, 100=59.46%, 250=12.89% 00:20:46.917 cpu : usr=36.58%, sys=1.66%, ctx=1102, majf=0, minf=9 00:20:46.917 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:46.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 issued rwts: total=2336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.917 filename0: (groupid=0, jobs=1): err= 0: pid=83190: Mon Nov 4 10:10:17 2024 00:20:46.917 read: IOPS=232, BW=931KiB/s (953kB/s)(9324KiB/10015msec) 00:20:46.917 slat (usec): min=4, max=8025, avg=28.91, stdev=321.63 00:20:46.917 clat (msec): min=21, max=154, avg=68.60, stdev=25.36 00:20:46.917 lat (msec): min=21, max=154, avg=68.62, stdev=25.37 00:20:46.917 clat percentiles (msec): 00:20:46.917 | 1.00th=[ 24], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.917 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:20:46.917 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 117], 00:20:46.917 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:20:46.917 | 99.99th=[ 155] 00:20:46.917 bw ( KiB/s): min= 650, max= 1421, per=4.19%, avg=928.35, stdev=194.95, samples=20 00:20:46.917 iops : min= 162, max= 355, avg=232.05, stdev=48.74, samples=20 00:20:46.917 lat (msec) : 50=29.56%, 100=57.40%, 250=13.04% 00:20:46.917 cpu : usr=33.26%, sys=1.47%, ctx=934, majf=0, minf=9 00:20:46.917 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.7%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:46.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 issued rwts: total=2331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.917 filename0: (groupid=0, jobs=1): err= 0: pid=83191: Mon Nov 4 10:10:17 2024 00:20:46.917 read: IOPS=216, BW=868KiB/s (889kB/s)(8700KiB/10026msec) 
00:20:46.917 slat (usec): min=8, max=10028, avg=34.08, stdev=332.00 00:20:46.917 clat (msec): min=21, max=164, avg=73.57, stdev=28.06 00:20:46.917 lat (msec): min=21, max=164, avg=73.60, stdev=28.06 00:20:46.917 clat percentiles (msec): 00:20:46.917 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.917 | 30.00th=[ 55], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 78], 00:20:46.917 | 70.00th=[ 84], 80.00th=[ 100], 90.00th=[ 116], 95.00th=[ 120], 00:20:46.917 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 165], 00:20:46.917 | 99.99th=[ 165] 00:20:46.917 bw ( KiB/s): min= 512, max= 1352, per=3.90%, avg=863.05, stdev=227.29, samples=20 00:20:46.917 iops : min= 128, max= 338, avg=215.75, stdev=56.80, samples=20 00:20:46.917 lat (msec) : 50=23.68%, 100=56.87%, 250=19.45% 00:20:46.917 cpu : usr=42.25%, sys=2.01%, ctx=1736, majf=0, minf=9 00:20:46.917 IO depths : 1=0.1%, 2=1.8%, 4=7.3%, 8=75.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:46.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.917 filename0: (groupid=0, jobs=1): err= 0: pid=83192: Mon Nov 4 10:10:17 2024 00:20:46.917 read: IOPS=223, BW=895KiB/s (916kB/s)(8980KiB/10035msec) 00:20:46.917 slat (usec): min=6, max=8029, avg=22.63, stdev=227.70 00:20:46.917 clat (msec): min=14, max=179, avg=71.35, stdev=26.06 00:20:46.917 lat (msec): min=14, max=179, avg=71.38, stdev=26.06 00:20:46.917 clat percentiles (msec): 00:20:46.917 | 1.00th=[ 18], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.917 | 30.00th=[ 59], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:46.917 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 109], 95.00th=[ 121], 00:20:46.917 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:20:46.917 | 99.99th=[ 180] 00:20:46.917 bw ( KiB/s): min= 616, max= 1490, per=4.03%, avg=891.70, stdev=210.53, samples=20 00:20:46.917 iops : min= 154, max= 372, avg=222.90, stdev=52.56, samples=20 00:20:46.917 lat (msec) : 20=2.05%, 50=22.23%, 100=60.98%, 250=14.74% 00:20:46.917 cpu : usr=33.55%, sys=1.42%, ctx=922, majf=0, minf=9 00:20:46.917 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=80.8%, 16=16.7%, 32=0.0%, >=64=0.0% 00:20:46.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.917 issued rwts: total=2245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.917 filename0: (groupid=0, jobs=1): err= 0: pid=83193: Mon Nov 4 10:10:17 2024 00:20:46.917 read: IOPS=237, BW=951KiB/s (974kB/s)(9512KiB/10001msec) 00:20:46.917 slat (usec): min=4, max=12042, avg=41.13, stdev=445.82 00:20:46.917 clat (usec): min=935, max=154418, avg=67131.04, stdev=27668.12 00:20:46.918 lat (usec): min=943, max=154435, avg=67172.16, stdev=27681.28 00:20:46.918 clat percentiles (msec): 00:20:46.918 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 35], 20.00th=[ 48], 00:20:46.918 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 70], 60.00th=[ 73], 00:20:46.918 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 109], 95.00th=[ 118], 00:20:46.918 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 155], 00:20:46.918 | 99.99th=[ 155] 00:20:46.918 bw ( KiB/s): min= 507, max= 1304, per=4.11%, avg=910.05, stdev=201.70, samples=19 
00:20:46.918 iops : min= 126, max= 326, avg=227.47, stdev=50.51, samples=19 00:20:46.918 lat (usec) : 1000=0.25% 00:20:46.918 lat (msec) : 2=0.04%, 4=0.46%, 10=2.14%, 20=1.09%, 50=24.60% 00:20:46.918 lat (msec) : 100=57.99%, 250=13.41% 00:20:46.918 cpu : usr=36.83%, sys=1.64%, ctx=1150, majf=0, minf=9 00:20:46.918 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:46.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 issued rwts: total=2378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.918 filename0: (groupid=0, jobs=1): err= 0: pid=83194: Mon Nov 4 10:10:17 2024 00:20:46.918 read: IOPS=234, BW=940KiB/s (963kB/s)(9412KiB/10013msec) 00:20:46.918 slat (usec): min=8, max=11019, avg=22.43, stdev=241.52 00:20:46.918 clat (msec): min=8, max=147, avg=67.98, stdev=24.62 00:20:46.918 lat (msec): min=8, max=147, avg=68.00, stdev=24.61 00:20:46.918 clat percentiles (msec): 00:20:46.918 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 37], 20.00th=[ 48], 00:20:46.918 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 73], 00:20:46.918 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 116], 00:20:46.918 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:20:46.918 | 99.99th=[ 148] 00:20:46.918 bw ( KiB/s): min= 638, max= 1296, per=4.18%, avg=924.53, stdev=175.15, samples=19 00:20:46.918 iops : min= 159, max= 324, avg=231.11, stdev=43.83, samples=19 00:20:46.918 lat (msec) : 10=0.38%, 20=0.85%, 50=28.64%, 100=58.44%, 250=11.69% 00:20:46.918 cpu : usr=35.78%, sys=1.63%, ctx=1042, majf=0, minf=9 00:20:46.918 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:46.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.918 filename1: (groupid=0, jobs=1): err= 0: pid=83195: Mon Nov 4 10:10:17 2024 00:20:46.918 read: IOPS=227, BW=909KiB/s (931kB/s)(9132KiB/10044msec) 00:20:46.918 slat (usec): min=6, max=8030, avg=24.91, stdev=290.43 00:20:46.918 clat (msec): min=10, max=154, avg=70.21, stdev=27.22 00:20:46.918 lat (msec): min=10, max=154, avg=70.23, stdev=27.22 00:20:46.918 clat percentiles (msec): 00:20:46.918 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 48], 00:20:46.918 | 30.00th=[ 58], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:20:46.918 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 109], 95.00th=[ 118], 00:20:46.918 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:20:46.918 | 99.99th=[ 155] 00:20:46.918 bw ( KiB/s): min= 608, max= 1664, per=4.09%, avg=906.80, stdev=247.10, samples=20 00:20:46.918 iops : min= 152, max= 416, avg=226.70, stdev=61.77, samples=20 00:20:46.918 lat (msec) : 20=2.10%, 50=25.41%, 100=57.64%, 250=14.85% 00:20:46.918 cpu : usr=33.44%, sys=1.52%, ctx=911, majf=0, minf=9 00:20:46.918 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:46.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.918 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:20:46.918 filename1: (groupid=0, jobs=1): err= 0: pid=83196: Mon Nov 4 10:10:17 2024 00:20:46.918 read: IOPS=216, BW=864KiB/s (885kB/s)(8660KiB/10018msec) 00:20:46.918 slat (usec): min=8, max=8046, avg=27.65, stdev=238.85 00:20:46.918 clat (msec): min=23, max=160, avg=73.87, stdev=26.19 00:20:46.918 lat (msec): min=23, max=160, avg=73.90, stdev=26.19 00:20:46.918 clat percentiles (msec): 00:20:46.918 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 49], 00:20:46.918 | 30.00th=[ 57], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 79], 00:20:46.918 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 114], 95.00th=[ 120], 00:20:46.918 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 161], 00:20:46.918 | 99.99th=[ 161] 00:20:46.918 bw ( KiB/s): min= 523, max= 1168, per=3.89%, avg=861.25, stdev=186.08, samples=20 00:20:46.918 iops : min= 130, max= 292, avg=215.25, stdev=46.55, samples=20 00:20:46.918 lat (msec) : 50=22.40%, 100=59.63%, 250=17.97% 00:20:46.918 cpu : usr=39.78%, sys=1.74%, ctx=1432, majf=0, minf=9 00:20:46.918 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.0%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:46.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 complete : 0=0.0%, 4=89.2%, 8=9.3%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.918 filename1: (groupid=0, jobs=1): err= 0: pid=83197: Mon Nov 4 10:10:17 2024 00:20:46.918 read: IOPS=234, BW=939KiB/s (962kB/s)(9400KiB/10007msec) 00:20:46.918 slat (usec): min=8, max=8037, avg=26.55, stdev=286.22 00:20:46.918 clat (msec): min=3, max=143, avg=68.01, stdev=26.32 00:20:46.918 lat (msec): min=3, max=143, avg=68.04, stdev=26.31 00:20:46.918 clat percentiles (msec): 00:20:46.918 | 1.00th=[ 8], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.918 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 73], 00:20:46.918 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 117], 00:20:46.918 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:20:46.918 | 99.99th=[ 144] 00:20:46.918 bw ( KiB/s): min= 638, max= 1269, per=4.10%, avg=907.95, stdev=186.93, samples=19 00:20:46.918 iops : min= 159, max= 317, avg=226.95, stdev=46.75, samples=19 00:20:46.918 lat (msec) : 4=0.13%, 10=1.40%, 20=1.45%, 50=27.96%, 100=54.94% 00:20:46.918 lat (msec) : 250=14.13% 00:20:46.918 cpu : usr=36.45%, sys=1.65%, ctx=1114, majf=0, minf=9 00:20:46.918 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:46.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 issued rwts: total=2350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.918 filename1: (groupid=0, jobs=1): err= 0: pid=83198: Mon Nov 4 10:10:17 2024 00:20:46.918 read: IOPS=232, BW=931KiB/s (953kB/s)(9336KiB/10030msec) 00:20:46.918 slat (usec): min=5, max=4031, avg=20.29, stdev=117.62 00:20:46.918 clat (msec): min=14, max=150, avg=68.59, stdev=26.43 00:20:46.918 lat (msec): min=14, max=150, avg=68.61, stdev=26.42 00:20:46.918 clat percentiles (msec): 00:20:46.918 | 1.00th=[ 17], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 48], 00:20:46.918 | 30.00th=[ 52], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 74], 00:20:46.918 | 70.00th=[ 81], 80.00th=[ 85], 
90.00th=[ 108], 95.00th=[ 118], 00:20:46.918 | 99.00th=[ 128], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:20:46.918 | 99.99th=[ 150] 00:20:46.918 bw ( KiB/s): min= 616, max= 1728, per=4.20%, avg=930.00, stdev=244.06, samples=20 00:20:46.918 iops : min= 154, max= 432, avg=232.50, stdev=61.02, samples=20 00:20:46.918 lat (msec) : 20=1.71%, 50=26.39%, 100=58.05%, 250=13.84% 00:20:46.918 cpu : usr=41.98%, sys=1.67%, ctx=1252, majf=0, minf=9 00:20:46.918 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:46.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.918 filename1: (groupid=0, jobs=1): err= 0: pid=83199: Mon Nov 4 10:10:17 2024 00:20:46.918 read: IOPS=236, BW=944KiB/s (967kB/s)(9452KiB/10008msec) 00:20:46.918 slat (usec): min=4, max=9020, avg=31.92, stdev=349.52 00:20:46.918 clat (msec): min=12, max=157, avg=67.62, stdev=24.87 00:20:46.918 lat (msec): min=12, max=157, avg=67.65, stdev=24.88 00:20:46.918 clat percentiles (msec): 00:20:46.918 | 1.00th=[ 23], 5.00th=[ 26], 10.00th=[ 37], 20.00th=[ 48], 00:20:46.918 | 30.00th=[ 50], 40.00th=[ 60], 50.00th=[ 71], 60.00th=[ 72], 00:20:46.918 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 107], 95.00th=[ 116], 00:20:46.918 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:20:46.918 | 99.99th=[ 157] 00:20:46.918 bw ( KiB/s): min= 640, max= 1384, per=4.23%, avg=936.74, stdev=194.81, samples=19 00:20:46.918 iops : min= 160, max= 346, avg=234.16, stdev=48.73, samples=19 00:20:46.918 lat (msec) : 20=0.51%, 50=30.55%, 100=56.92%, 250=12.02% 00:20:46.918 cpu : usr=33.49%, sys=1.37%, ctx=998, majf=0, minf=9 00:20:46.918 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=83.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:46.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.918 issued rwts: total=2363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.919 filename1: (groupid=0, jobs=1): err= 0: pid=83200: Mon Nov 4 10:10:17 2024 00:20:46.919 read: IOPS=238, BW=953KiB/s (976kB/s)(9540KiB/10007msec) 00:20:46.919 slat (usec): min=5, max=8024, avg=29.77, stdev=246.20 00:20:46.919 clat (msec): min=9, max=150, avg=67.01, stdev=25.47 00:20:46.919 lat (msec): min=9, max=150, avg=67.04, stdev=25.48 00:20:46.919 clat percentiles (msec): 00:20:46.919 | 1.00th=[ 14], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.919 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 70], 60.00th=[ 73], 00:20:46.919 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 107], 95.00th=[ 116], 00:20:46.919 | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 150], 99.95th=[ 150], 00:20:46.919 | 99.99th=[ 150] 00:20:46.919 bw ( KiB/s): min= 632, max= 1277, per=4.21%, avg=932.37, stdev=199.85, samples=19 00:20:46.919 iops : min= 158, max= 319, avg=233.05, stdev=49.97, samples=19 00:20:46.919 lat (msec) : 10=0.25%, 20=2.14%, 50=26.25%, 100=58.74%, 250=12.62% 00:20:46.919 cpu : usr=43.65%, sys=1.79%, ctx=1427, majf=0, minf=9 00:20:46.919 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:46.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 complete : 0=0.0%, 4=87.0%, 
8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 issued rwts: total=2385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.919 filename1: (groupid=0, jobs=1): err= 0: pid=83201: Mon Nov 4 10:10:17 2024 00:20:46.919 read: IOPS=238, BW=952KiB/s (975kB/s)(9532KiB/10010msec) 00:20:46.919 slat (usec): min=4, max=8041, avg=31.61, stdev=313.16 00:20:46.919 clat (msec): min=21, max=150, avg=67.06, stdev=24.46 00:20:46.919 lat (msec): min=22, max=150, avg=67.09, stdev=24.46 00:20:46.919 clat percentiles (msec): 00:20:46.919 | 1.00th=[ 24], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.919 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 71], 60.00th=[ 72], 00:20:46.919 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 107], 95.00th=[ 117], 00:20:46.919 | 99.00th=[ 124], 99.50th=[ 133], 99.90th=[ 150], 99.95th=[ 150], 00:20:46.919 | 99.99th=[ 150] 00:20:46.919 bw ( KiB/s): min= 664, max= 1376, per=4.29%, avg=949.37, stdev=200.88, samples=19 00:20:46.919 iops : min= 166, max= 344, avg=237.32, stdev=50.25, samples=19 00:20:46.919 lat (msec) : 50=31.64%, 100=56.40%, 250=11.96% 00:20:46.919 cpu : usr=32.86%, sys=1.32%, ctx=911, majf=0, minf=9 00:20:46.919 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:46.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 issued rwts: total=2383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.919 filename1: (groupid=0, jobs=1): err= 0: pid=83202: Mon Nov 4 10:10:17 2024 00:20:46.919 read: IOPS=238, BW=955KiB/s (978kB/s)(9552KiB/10004msec) 00:20:46.919 slat (nsec): min=8003, max=59259, avg=16365.95, stdev=8104.03 00:20:46.919 clat (msec): min=3, max=155, avg=66.95, stdev=26.16 00:20:46.919 lat (msec): min=3, max=155, avg=66.96, stdev=26.16 00:20:46.919 clat percentiles (msec): 00:20:46.919 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.919 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:20:46.919 | 70.00th=[ 80], 80.00th=[ 84], 90.00th=[ 108], 95.00th=[ 117], 00:20:46.919 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:20:46.919 | 99.99th=[ 155] 00:20:46.919 bw ( KiB/s): min= 664, max= 1280, per=4.17%, avg=922.00, stdev=194.76, samples=19 00:20:46.919 iops : min= 166, max= 320, avg=230.47, stdev=48.72, samples=19 00:20:46.919 lat (msec) : 4=0.13%, 10=2.05%, 20=0.92%, 50=27.51%, 100=56.83% 00:20:46.919 lat (msec) : 250=12.56% 00:20:46.919 cpu : usr=36.10%, sys=1.49%, ctx=1053, majf=0, minf=9 00:20:46.919 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=82.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:46.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.919 filename2: (groupid=0, jobs=1): err= 0: pid=83203: Mon Nov 4 10:10:17 2024 00:20:46.919 read: IOPS=234, BW=939KiB/s (962kB/s)(9456KiB/10070msec) 00:20:46.919 slat (usec): min=4, max=12048, avg=36.16, stdev=418.34 00:20:46.919 clat (msec): min=2, max=191, avg=67.85, stdev=29.37 00:20:46.919 lat (msec): min=2, max=191, avg=67.89, stdev=29.36 00:20:46.919 clat percentiles (msec): 00:20:46.919 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 
28], 20.00th=[ 45], 00:20:46.919 | 30.00th=[ 53], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 77], 00:20:46.919 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 117], 00:20:46.919 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 165], 00:20:46.919 | 99.99th=[ 192] 00:20:46.919 bw ( KiB/s): min= 680, max= 2372, per=4.24%, avg=939.40, stdev=384.99, samples=20 00:20:46.919 iops : min= 170, max= 593, avg=234.85, stdev=96.25, samples=20 00:20:46.919 lat (msec) : 4=2.58%, 10=1.40%, 20=2.03%, 50=21.62%, 100=57.23% 00:20:46.919 lat (msec) : 250=15.14% 00:20:46.919 cpu : usr=39.44%, sys=1.63%, ctx=1200, majf=0, minf=0 00:20:46.919 IO depths : 1=0.3%, 2=0.8%, 4=2.2%, 8=80.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:46.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.919 filename2: (groupid=0, jobs=1): err= 0: pid=83204: Mon Nov 4 10:10:17 2024 00:20:46.919 read: IOPS=231, BW=926KiB/s (948kB/s)(9300KiB/10044msec) 00:20:46.919 slat (usec): min=8, max=8026, avg=27.91, stdev=304.52 00:20:46.919 clat (msec): min=10, max=155, avg=68.94, stdev=26.81 00:20:46.919 lat (msec): min=10, max=155, avg=68.97, stdev=26.82 00:20:46.919 clat percentiles (msec): 00:20:46.919 | 1.00th=[ 14], 5.00th=[ 24], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.919 | 30.00th=[ 51], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 73], 00:20:46.919 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 121], 00:20:46.919 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:20:46.919 | 99.99th=[ 157] 00:20:46.919 bw ( KiB/s): min= 616, max= 1720, per=4.17%, avg=923.60, stdev=246.28, samples=20 00:20:46.919 iops : min= 154, max= 430, avg=230.90, stdev=61.57, samples=20 00:20:46.919 lat (msec) : 20=2.02%, 50=28.09%, 100=55.40%, 250=14.49% 00:20:46.919 cpu : usr=33.05%, sys=1.29%, ctx=927, majf=0, minf=9 00:20:46.919 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:46.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.919 filename2: (groupid=0, jobs=1): err= 0: pid=83205: Mon Nov 4 10:10:17 2024 00:20:46.919 read: IOPS=221, BW=888KiB/s (909kB/s)(8912KiB/10038msec) 00:20:46.919 slat (usec): min=6, max=6260, avg=19.23, stdev=151.55 00:20:46.919 clat (msec): min=15, max=160, avg=71.91, stdev=28.47 00:20:46.919 lat (msec): min=15, max=160, avg=71.93, stdev=28.47 00:20:46.919 clat percentiles (msec): 00:20:46.919 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 35], 20.00th=[ 48], 00:20:46.919 | 30.00th=[ 54], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 78], 00:20:46.919 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 113], 95.00th=[ 121], 00:20:46.919 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 159], 99.95th=[ 161], 00:20:46.919 | 99.99th=[ 161] 00:20:46.919 bw ( KiB/s): min= 512, max= 1644, per=3.99%, avg=884.85, stdev=260.50, samples=20 00:20:46.919 iops : min= 128, max= 411, avg=221.15, stdev=65.09, samples=20 00:20:46.919 lat (msec) : 20=2.06%, 50=21.27%, 100=58.98%, 250=17.68% 00:20:46.919 cpu : usr=44.29%, sys=1.99%, ctx=1498, majf=0, minf=9 00:20:46.919 IO depths : 1=0.1%, 2=1.5%, 
4=6.3%, 8=76.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:46.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.919 issued rwts: total=2228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.919 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.919 filename2: (groupid=0, jobs=1): err= 0: pid=83206: Mon Nov 4 10:10:17 2024 00:20:46.919 read: IOPS=231, BW=925KiB/s (947kB/s)(9288KiB/10045msec) 00:20:46.919 slat (usec): min=5, max=4028, avg=17.36, stdev=83.74 00:20:46.919 clat (msec): min=9, max=155, avg=69.06, stdev=26.27 00:20:46.919 lat (msec): min=9, max=155, avg=69.08, stdev=26.27 00:20:46.919 clat percentiles (msec): 00:20:46.919 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 32], 20.00th=[ 48], 00:20:46.919 | 30.00th=[ 55], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 74], 00:20:46.919 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 116], 00:20:46.919 | 99.00th=[ 130], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 155], 00:20:46.919 | 99.99th=[ 157] 00:20:46.919 bw ( KiB/s): min= 640, max= 1755, per=4.17%, avg=922.55, stdev=254.18, samples=20 00:20:46.919 iops : min= 160, max= 438, avg=230.60, stdev=63.42, samples=20 00:20:46.919 lat (msec) : 10=0.13%, 20=1.03%, 50=25.19%, 100=59.35%, 250=14.30% 00:20:46.920 cpu : usr=40.38%, sys=1.86%, ctx=1282, majf=0, minf=9 00:20:46.920 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:46.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 issued rwts: total=2322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.920 filename2: (groupid=0, jobs=1): err= 0: pid=83207: Mon Nov 4 10:10:17 2024 00:20:46.920 read: IOPS=221, BW=885KiB/s (907kB/s)(8892KiB/10044msec) 00:20:46.920 slat (usec): min=6, max=8029, avg=28.54, stdev=328.52 00:20:46.920 clat (msec): min=9, max=155, avg=72.13, stdev=26.32 00:20:46.920 lat (msec): min=9, max=155, avg=72.16, stdev=26.32 00:20:46.920 clat percentiles (msec): 00:20:46.920 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.920 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:20:46.920 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 117], 00:20:46.920 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:20:46.920 | 99.99th=[ 157] 00:20:46.920 bw ( KiB/s): min= 608, max= 1544, per=3.99%, avg=882.80, stdev=213.96, samples=20 00:20:46.920 iops : min= 152, max= 386, avg=220.70, stdev=53.49, samples=20 00:20:46.920 lat (msec) : 10=0.13%, 20=2.11%, 50=22.18%, 100=60.05%, 250=15.52% 00:20:46.920 cpu : usr=33.59%, sys=1.45%, ctx=984, majf=0, minf=9 00:20:46.920 IO depths : 1=0.1%, 2=0.6%, 4=2.1%, 8=80.4%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:46.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.920 filename2: (groupid=0, jobs=1): err= 0: pid=83208: Mon Nov 4 10:10:17 2024 00:20:46.920 read: IOPS=232, BW=930KiB/s (952kB/s)(9320KiB/10025msec) 00:20:46.920 slat (usec): min=5, max=4032, avg=19.78, stdev=117.75 00:20:46.920 clat (msec): min=12, max=151, avg=68.69, stdev=25.76 
00:20:46.920 lat (msec): min=12, max=151, avg=68.71, stdev=25.76 00:20:46.920 clat percentiles (msec): 00:20:46.920 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 48], 00:20:46.920 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 74], 00:20:46.920 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 109], 95.00th=[ 116], 00:20:46.920 | 99.00th=[ 128], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 146], 00:20:46.920 | 99.99th=[ 153] 00:20:46.920 bw ( KiB/s): min= 664, max= 1520, per=4.19%, avg=928.40, stdev=210.53, samples=20 00:20:46.920 iops : min= 166, max= 380, avg=232.10, stdev=52.63, samples=20 00:20:46.920 lat (msec) : 20=0.64%, 50=27.30%, 100=58.28%, 250=13.78% 00:20:46.920 cpu : usr=40.93%, sys=1.74%, ctx=1250, majf=0, minf=9 00:20:46.920 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:46.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 issued rwts: total=2330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.920 filename2: (groupid=0, jobs=1): err= 0: pid=83209: Mon Nov 4 10:10:17 2024 00:20:46.920 read: IOPS=230, BW=923KiB/s (945kB/s)(9240KiB/10013msec) 00:20:46.920 slat (usec): min=4, max=8029, avg=30.75, stdev=343.39 00:20:46.920 clat (msec): min=11, max=155, avg=69.23, stdev=25.92 00:20:46.920 lat (msec): min=11, max=155, avg=69.26, stdev=25.93 00:20:46.920 clat percentiles (msec): 00:20:46.920 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.920 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 73], 00:20:46.920 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 121], 00:20:46.920 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:20:46.920 | 99.99th=[ 157] 00:20:46.920 bw ( KiB/s): min= 592, max= 1440, per=4.15%, avg=919.70, stdev=202.98, samples=20 00:20:46.920 iops : min= 148, max= 360, avg=229.90, stdev=50.78, samples=20 00:20:46.920 lat (msec) : 20=0.61%, 50=29.74%, 100=55.54%, 250=14.11% 00:20:46.920 cpu : usr=32.86%, sys=1.72%, ctx=935, majf=0, minf=9 00:20:46.920 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:46.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.920 filename2: (groupid=0, jobs=1): err= 0: pid=83210: Mon Nov 4 10:10:17 2024 00:20:46.920 read: IOPS=230, BW=922KiB/s (945kB/s)(9228KiB/10004msec) 00:20:46.920 slat (usec): min=5, max=8032, avg=29.58, stdev=312.03 00:20:46.920 clat (msec): min=3, max=144, avg=69.23, stdev=25.61 00:20:46.920 lat (msec): min=3, max=144, avg=69.26, stdev=25.63 00:20:46.920 clat percentiles (msec): 00:20:46.920 | 1.00th=[ 8], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 48], 00:20:46.920 | 30.00th=[ 54], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 74], 00:20:46.920 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 116], 00:20:46.920 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:20:46.920 | 99.99th=[ 144] 00:20:46.920 bw ( KiB/s): min= 640, max= 1253, per=4.03%, avg=891.95, stdev=180.05, samples=19 00:20:46.920 iops : min= 160, max= 313, avg=222.95, stdev=45.01, samples=19 00:20:46.920 lat (msec) : 4=0.13%, 10=1.52%, 20=1.26%, 50=23.28%, 
100=61.21% 00:20:46.920 lat (msec) : 250=12.61% 00:20:46.920 cpu : usr=34.80%, sys=1.54%, ctx=1070, majf=0, minf=9 00:20:46.920 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:46.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.920 issued rwts: total=2307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.920 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:46.920 00:20:46.920 Run status group 0 (all jobs): 00:20:46.920 READ: bw=21.6MiB/s (22.7MB/s), 864KiB/s-966KiB/s (885kB/s-989kB/s), io=218MiB (228MB), run=10001-10072msec 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 
00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:46.920 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 bdev_null0 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 [2024-11-04 10:10:17.498510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.3 port 4420 *** 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 bdev_null1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.921 { 00:20:46.921 "params": { 00:20:46.921 "name": "Nvme$subsystem", 00:20:46.921 "trtype": "$TEST_TRANSPORT", 00:20:46.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.921 "adrfam": "ipv4", 00:20:46.921 "trsvcid": "$NVMF_PORT", 00:20:46.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.921 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:46.921 "hdgst": ${hdgst:-false}, 00:20:46.921 "ddgst": ${ddgst:-false} 00:20:46.921 }, 00:20:46.921 "method": "bdev_nvme_attach_controller" 00:20:46.921 } 00:20:46.921 EOF 00:20:46.921 )") 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.921 { 00:20:46.921 "params": { 00:20:46.921 "name": "Nvme$subsystem", 00:20:46.921 "trtype": "$TEST_TRANSPORT", 00:20:46.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.921 "adrfam": "ipv4", 00:20:46.921 "trsvcid": "$NVMF_PORT", 00:20:46.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.921 "hdgst": ${hdgst:-false}, 00:20:46.921 "ddgst": ${ddgst:-false} 00:20:46.921 }, 00:20:46.921 "method": "bdev_nvme_attach_controller" 00:20:46.921 } 00:20:46.921 EOF 00:20:46.921 )") 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:46.921 "params": { 00:20:46.921 "name": "Nvme0", 00:20:46.921 "trtype": "tcp", 00:20:46.921 "traddr": "10.0.0.3", 00:20:46.921 "adrfam": "ipv4", 00:20:46.921 "trsvcid": "4420", 00:20:46.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:46.921 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:46.921 "hdgst": false, 00:20:46.921 "ddgst": false 00:20:46.921 }, 00:20:46.921 "method": "bdev_nvme_attach_controller" 00:20:46.921 },{ 00:20:46.921 "params": { 00:20:46.921 "name": "Nvme1", 00:20:46.921 "trtype": "tcp", 00:20:46.921 "traddr": "10.0.0.3", 00:20:46.921 "adrfam": "ipv4", 00:20:46.921 "trsvcid": "4420", 00:20:46.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.921 "hdgst": false, 00:20:46.921 "ddgst": false 00:20:46.921 }, 00:20:46.921 "method": "bdev_nvme_attach_controller" 00:20:46.921 }' 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:46.921 10:10:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:46.921 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:46.921 ... 00:20:46.921 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:46.921 ... 
00:20:46.922 fio-3.35 00:20:46.922 Starting 4 threads 00:20:52.192 00:20:52.192 filename0: (groupid=0, jobs=1): err= 0: pid=83352: Mon Nov 4 10:10:23 2024 00:20:52.192 read: IOPS=2093, BW=16.4MiB/s (17.1MB/s)(81.8MiB/5002msec) 00:20:52.192 slat (nsec): min=4805, max=60590, avg=14236.15, stdev=4259.91 00:20:52.192 clat (usec): min=717, max=9130, avg=3780.84, stdev=994.93 00:20:52.192 lat (usec): min=726, max=9143, avg=3795.07, stdev=995.56 00:20:52.192 clat percentiles (usec): 00:20:52.192 | 1.00th=[ 1385], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2737], 00:20:52.192 | 30.00th=[ 3326], 40.00th=[ 3556], 50.00th=[ 4047], 60.00th=[ 4228], 00:20:52.192 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5080], 00:20:52.192 | 99.00th=[ 5604], 99.50th=[ 5997], 99.90th=[ 6259], 99.95th=[ 6259], 00:20:52.192 | 99.99th=[ 7242] 00:20:52.192 bw ( KiB/s): min=13210, max=18896, per=26.25%, avg=16893.56, stdev=1994.55, samples=9 00:20:52.192 iops : min= 1651, max= 2362, avg=2111.67, stdev=249.38, samples=9 00:20:52.192 lat (usec) : 750=0.03%, 1000=0.49% 00:20:52.192 lat (msec) : 2=2.59%, 4=43.98%, 10=52.91% 00:20:52.192 cpu : usr=90.46%, sys=8.48%, ctx=21, majf=0, minf=0 00:20:52.192 IO depths : 1=0.1%, 2=7.0%, 4=61.6%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.192 complete : 0=0.0%, 4=97.3%, 8=2.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.192 issued rwts: total=10472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:52.192 filename0: (groupid=0, jobs=1): err= 0: pid=83353: Mon Nov 4 10:10:23 2024 00:20:52.192 read: IOPS=1964, BW=15.3MiB/s (16.1MB/s)(76.8MiB/5003msec) 00:20:52.192 slat (nsec): min=6330, max=65493, avg=13129.85, stdev=4532.02 00:20:52.192 clat (usec): min=1066, max=6445, avg=4028.23, stdev=839.69 00:20:52.192 lat (usec): min=1075, max=6457, avg=4041.36, stdev=839.88 00:20:52.192 clat percentiles (usec): 00:20:52.192 | 1.00th=[ 1500], 5.00th=[ 2278], 10.00th=[ 2638], 20.00th=[ 3326], 00:20:52.192 | 30.00th=[ 3654], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4490], 00:20:52.192 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5014], 00:20:52.192 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 5669], 99.95th=[ 5866], 00:20:52.192 | 99.99th=[ 6456] 00:20:52.192 bw ( KiB/s): min=13440, max=18496, per=24.02%, avg=15457.78, stdev=1699.25, samples=9 00:20:52.192 iops : min= 1680, max= 2312, avg=1932.22, stdev=212.41, samples=9 00:20:52.192 lat (msec) : 2=1.89%, 4=34.52%, 10=63.59% 00:20:52.192 cpu : usr=90.58%, sys=8.52%, ctx=5, majf=0, minf=0 00:20:52.192 IO depths : 1=0.1%, 2=12.4%, 4=59.0%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.192 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.192 issued rwts: total=9829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:52.192 filename1: (groupid=0, jobs=1): err= 0: pid=83354: Mon Nov 4 10:10:23 2024 00:20:52.192 read: IOPS=2050, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5002msec) 00:20:52.192 slat (nsec): min=4996, max=57588, avg=15157.30, stdev=4380.58 00:20:52.192 clat (usec): min=1072, max=7042, avg=3855.21, stdev=903.52 00:20:52.192 lat (usec): min=1086, max=7057, avg=3870.37, stdev=903.26 00:20:52.192 clat percentiles (usec): 00:20:52.192 | 1.00th=[ 2114], 5.00th=[ 2245], 10.00th=[ 2343], 
20.00th=[ 3032], 00:20:52.192 | 30.00th=[ 3359], 40.00th=[ 3851], 50.00th=[ 4146], 60.00th=[ 4293], 00:20:52.192 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5080], 00:20:52.192 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 5800], 99.95th=[ 6718], 00:20:52.192 | 99.99th=[ 6783] 00:20:52.192 bw ( KiB/s): min=13184, max=18896, per=25.66%, avg=16512.00, stdev=1960.60, samples=9 00:20:52.192 iops : min= 1648, max= 2362, avg=2064.00, stdev=245.08, samples=9 00:20:52.192 lat (msec) : 2=0.64%, 4=42.59%, 10=56.77% 00:20:52.192 cpu : usr=91.74%, sys=7.32%, ctx=4, majf=0, minf=0 00:20:52.192 IO depths : 1=0.1%, 2=8.8%, 4=60.8%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.192 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.192 issued rwts: total=10259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:52.192 filename1: (groupid=0, jobs=1): err= 0: pid=83355: Mon Nov 4 10:10:23 2024 00:20:52.192 read: IOPS=1934, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5003msec) 00:20:52.192 slat (nsec): min=7220, max=82173, avg=14780.45, stdev=4783.74 00:20:52.192 clat (usec): min=1016, max=7244, avg=4086.12, stdev=778.60 00:20:52.192 lat (usec): min=1024, max=7258, avg=4100.90, stdev=778.39 00:20:52.192 clat percentiles (usec): 00:20:52.192 | 1.00th=[ 1991], 5.00th=[ 2409], 10.00th=[ 3064], 20.00th=[ 3359], 00:20:52.192 | 30.00th=[ 3851], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4555], 00:20:52.192 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:20:52.192 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 5604], 99.95th=[ 5800], 00:20:52.192 | 99.99th=[ 7242] 00:20:52.192 bw ( KiB/s): min=13440, max=18288, per=23.99%, avg=15434.67, stdev=1653.89, samples=9 00:20:52.192 iops : min= 1680, max= 2286, avg=1929.33, stdev=206.74, samples=9 00:20:52.192 lat (msec) : 2=1.01%, 4=32.49%, 10=66.50% 00:20:52.192 cpu : usr=90.74%, sys=8.26%, ctx=181, majf=0, minf=0 00:20:52.192 IO depths : 1=0.1%, 2=13.6%, 4=58.4%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.192 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.192 issued rwts: total=9678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.192 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:52.192 00:20:52.192 Run status group 0 (all jobs): 00:20:52.192 READ: bw=62.8MiB/s (65.9MB/s), 15.1MiB/s-16.4MiB/s (15.8MB/s-17.1MB/s), io=314MiB (330MB), run=5002-5003msec 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.192 ************************************ 00:20:52.192 END TEST fio_dif_rand_params 00:20:52.192 ************************************ 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.192 00:20:52.192 real 0m23.853s 00:20:52.192 user 2m4.337s 00:20:52.192 sys 0m7.706s 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:52.192 10:10:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:52.192 10:10:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:52.192 10:10:23 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:52.192 10:10:23 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:52.192 10:10:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:52.193 ************************************ 00:20:52.193 START TEST fio_dif_digest 00:20:52.193 ************************************ 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:52.193 10:10:23 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:52.193 bdev_null0 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:52.193 [2024-11-04 10:10:23.737491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:52.193 { 00:20:52.193 "params": { 00:20:52.193 "name": "Nvme$subsystem", 00:20:52.193 "trtype": "$TEST_TRANSPORT", 00:20:52.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.193 "adrfam": "ipv4", 00:20:52.193 "trsvcid": "$NVMF_PORT", 00:20:52.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.193 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.193 "hdgst": ${hdgst:-false}, 00:20:52.193 "ddgst": ${ddgst:-false} 00:20:52.193 }, 00:20:52.193 "method": "bdev_nvme_attach_controller" 00:20:52.193 } 00:20:52.193 EOF 00:20:52.193 )") 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:52.193 "params": { 00:20:52.193 "name": "Nvme0", 00:20:52.193 "trtype": "tcp", 00:20:52.193 "traddr": "10.0.0.3", 00:20:52.193 "adrfam": "ipv4", 00:20:52.193 "trsvcid": "4420", 00:20:52.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:52.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:52.193 "hdgst": true, 00:20:52.193 "ddgst": true 00:20:52.193 }, 00:20:52.193 "method": "bdev_nvme_attach_controller" 00:20:52.193 }' 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:52.193 10:10:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.193 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:52.193 ... 
00:20:52.193 fio-3.35 00:20:52.193 Starting 3 threads 00:21:04.402 00:21:04.402 filename0: (groupid=0, jobs=1): err= 0: pid=83461: Mon Nov 4 10:10:34 2024 00:21:04.402 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10001msec) 00:21:04.402 slat (nsec): min=6803, max=60733, avg=13902.37, stdev=5097.46 00:21:04.402 clat (usec): min=11347, max=14517, avg=13075.70, stdev=420.17 00:21:04.402 lat (usec): min=11355, max=14531, avg=13089.60, stdev=420.43 00:21:04.402 clat percentiles (usec): 00:21:04.402 | 1.00th=[11994], 5.00th=[12387], 10.00th=[12518], 20.00th=[12780], 00:21:04.402 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:21:04.402 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:21:04.402 | 99.00th=[14091], 99.50th=[14222], 99.90th=[14484], 99.95th=[14484], 00:21:04.402 | 99.99th=[14484] 00:21:04.402 bw ( KiB/s): min=28359, max=29952, per=33.35%, avg=29302.26, stdev=468.68, samples=19 00:21:04.402 iops : min= 221, max= 234, avg=228.89, stdev= 3.73, samples=19 00:21:04.402 lat (msec) : 20=100.00% 00:21:04.402 cpu : usr=92.04%, sys=7.41%, ctx=23, majf=0, minf=0 00:21:04.402 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:04.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.402 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:04.402 filename0: (groupid=0, jobs=1): err= 0: pid=83462: Mon Nov 4 10:10:34 2024 00:21:04.402 read: IOPS=229, BW=28.6MiB/s (30.0MB/s)(287MiB/10007msec) 00:21:04.402 slat (nsec): min=6987, max=58179, avg=12948.91, stdev=7484.87 00:21:04.402 clat (usec): min=6014, max=14434, avg=13064.66, stdev=486.64 00:21:04.402 lat (usec): min=6023, max=14473, avg=13077.61, stdev=486.90 00:21:04.402 clat percentiles (usec): 00:21:04.402 | 1.00th=[11994], 5.00th=[12387], 10.00th=[12518], 20.00th=[12649], 00:21:04.402 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:21:04.402 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:21:04.402 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14353], 99.95th=[14484], 00:21:04.402 | 99.99th=[14484] 00:21:04.402 bw ( KiB/s): min=28416, max=29952, per=33.34%, avg=29299.20, stdev=515.19, samples=20 00:21:04.402 iops : min= 222, max= 234, avg=228.90, stdev= 4.02, samples=20 00:21:04.402 lat (msec) : 10=0.13%, 20=99.87% 00:21:04.402 cpu : usr=89.88%, sys=9.34%, ctx=10, majf=0, minf=0 00:21:04.402 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:04.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.402 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:04.402 filename0: (groupid=0, jobs=1): err= 0: pid=83463: Mon Nov 4 10:10:34 2024 00:21:04.402 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10001msec) 00:21:04.402 slat (usec): min=7, max=108, avg=16.69, stdev= 9.29 00:21:04.402 clat (usec): min=9707, max=14718, avg=13066.57, stdev=435.74 00:21:04.402 lat (usec): min=9716, max=14781, avg=13083.26, stdev=436.28 00:21:04.402 clat percentiles (usec): 00:21:04.402 | 1.00th=[11994], 5.00th=[12387], 10.00th=[12518], 20.00th=[12780], 00:21:04.402 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 
60.00th=[13173], 00:21:04.402 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13698], 00:21:04.402 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14746], 99.95th=[14746], 00:21:04.402 | 99.99th=[14746] 00:21:04.402 bw ( KiB/s): min=28416, max=29952, per=33.35%, avg=29305.26, stdev=462.44, samples=19 00:21:04.402 iops : min= 222, max= 234, avg=228.95, stdev= 3.61, samples=19 00:21:04.402 lat (msec) : 10=0.13%, 20=99.87% 00:21:04.402 cpu : usr=92.01%, sys=7.25%, ctx=261, majf=0, minf=0 00:21:04.402 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:04.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.402 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:04.402 00:21:04.402 Run status group 0 (all jobs): 00:21:04.402 READ: bw=85.8MiB/s (90.0MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=859MiB (900MB), run=10001-10007msec 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:04.402 ************************************ 00:21:04.402 END TEST fio_dif_digest 00:21:04.402 ************************************ 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.402 00:21:04.402 real 0m11.083s 00:21:04.402 user 0m28.107s 00:21:04.402 sys 0m2.696s 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:04.402 10:10:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:04.402 10:10:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:04.402 10:10:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:04.402 rmmod nvme_tcp 00:21:04.402 rmmod nvme_fabrics 00:21:04.402 rmmod nvme_keyring 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:04.402 10:10:34 nvmf_dif -- 
nvmf/common.sh@128 -- # set -e 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82714 ']' 00:21:04.402 10:10:34 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82714 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 82714 ']' 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 82714 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82714 00:21:04.402 killing process with pid 82714 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82714' 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@971 -- # kill 82714 00:21:04.402 10:10:34 nvmf_dif -- common/autotest_common.sh@976 -- # wait 82714 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:04.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:04.402 Waiting for block devices as requested 00:21:04.402 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:04.402 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.402 10:10:35 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:21:04.402 10:10:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.403 10:10:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:04.403 10:10:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.403 10:10:36 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:04.403 00:21:04.403 real 1m0.150s 00:21:04.403 user 3m49.030s 00:21:04.403 sys 0m19.087s 00:21:04.403 10:10:36 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:04.403 10:10:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:04.403 ************************************ 00:21:04.403 END TEST nvmf_dif 00:21:04.403 ************************************ 00:21:04.403 10:10:36 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:04.403 10:10:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:04.403 10:10:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:04.403 10:10:36 -- common/autotest_common.sh@10 -- # set +x 00:21:04.403 ************************************ 00:21:04.403 START TEST nvmf_abort_qd_sizes 00:21:04.403 ************************************ 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:04.403 * Looking for test storage... 00:21:04.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:04.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.403 --rc genhtml_branch_coverage=1 00:21:04.403 --rc genhtml_function_coverage=1 00:21:04.403 --rc genhtml_legend=1 00:21:04.403 --rc geninfo_all_blocks=1 00:21:04.403 --rc geninfo_unexecuted_blocks=1 00:21:04.403 00:21:04.403 ' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:04.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.403 --rc genhtml_branch_coverage=1 00:21:04.403 --rc genhtml_function_coverage=1 00:21:04.403 --rc genhtml_legend=1 00:21:04.403 --rc geninfo_all_blocks=1 00:21:04.403 --rc geninfo_unexecuted_blocks=1 00:21:04.403 00:21:04.403 ' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:04.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.403 --rc genhtml_branch_coverage=1 00:21:04.403 --rc genhtml_function_coverage=1 00:21:04.403 --rc genhtml_legend=1 00:21:04.403 --rc geninfo_all_blocks=1 00:21:04.403 --rc geninfo_unexecuted_blocks=1 00:21:04.403 00:21:04.403 ' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:04.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.403 --rc genhtml_branch_coverage=1 00:21:04.403 --rc genhtml_function_coverage=1 00:21:04.403 --rc genhtml_legend=1 00:21:04.403 --rc geninfo_all_blocks=1 00:21:04.403 --rc geninfo_unexecuted_blocks=1 00:21:04.403 00:21:04.403 ' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.403 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:04.403 10:10:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:04.404 Cannot find device "nvmf_init_br" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:04.404 Cannot find device "nvmf_init_br2" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:04.404 Cannot find device "nvmf_tgt_br" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.404 Cannot find device "nvmf_tgt_br2" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:04.404 Cannot find device "nvmf_init_br" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:04.404 Cannot find device "nvmf_init_br2" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:04.404 Cannot find device "nvmf_tgt_br" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:04.404 Cannot find device "nvmf_tgt_br2" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:04.404 Cannot find device "nvmf_br" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:04.404 Cannot find device "nvmf_init_if" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:04.404 Cannot find device "nvmf_init_if2" 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
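The "Cannot find device" messages above are expected on a clean host: nvmf_veth_init first tries to tear down any leftover interfaces before building the test network. Condensed from the commands traced below, the topology it creates looks roughly like this (a sketch only; the interface and namespace names are the variables defined above, link-up commands and error handling omitted):

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # two initiator and two target veth pairs; the *_br peers stay in the root namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator addresses in the root namespace, target addresses inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # one bridge joins the four *_br peers so initiators and target can reach each other
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

After the links are brought up, iptables ACCEPT rules are inserted for TCP port 4420 on both initiator interfaces, and the four pings verify connectivity in both directions.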
00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:04.404 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:04.663 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:04.663 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:21:04.663 00:21:04.663 --- 10.0.0.3 ping statistics --- 00:21:04.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.663 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:04.663 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:04.663 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:21:04.663 00:21:04.663 --- 10.0.0.4 ping statistics --- 00:21:04.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.663 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:04.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:04.663 00:21:04.663 --- 10.0.0.1 ping statistics --- 00:21:04.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.663 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:04.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:04.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:04.663 00:21:04.663 --- 10.0.0.2 ping statistics --- 00:21:04.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.663 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:04.663 10:10:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:05.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.490 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:05.490 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:05.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84113 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84113 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 84113 ']' 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:05.490 10:10:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:05.748 [2024-11-04 10:10:37.695704] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:21:05.748 [2024-11-04 10:10:37.696011] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.748 [2024-11-04 10:10:37.855799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.006 [2024-11-04 10:10:37.941257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.006 [2024-11-04 10:10:37.941731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.006 [2024-11-04 10:10:37.941983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.006 [2024-11-04 10:10:37.942133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.006 [2024-11-04 10:10:37.942230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.006 [2024-11-04 10:10:37.943783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.006 [2024-11-04 10:10:37.944178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.006 [2024-11-04 10:10:37.944231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.006 [2024-11-04 10:10:37.944240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.006 [2024-11-04 10:10:38.023134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:06.006 10:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:06.007 10:10:38 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:06.007 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
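nvme_in_userspace enumerates PCI functions with class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVM Express), then keeps the ones the test is allowed to use. The traced pipeline is roughly equivalent to this standalone command (assumes lspci from pciutils; quoting mirrors the xtrace above):

  # print the BDFs of all NVMe controllers in the system
  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
      | tr -d '"'

On this VM it yields 0000:00:10.0 and 0000:00:11.0; the first controller is then attached with bdev_nvme_attach_controller for the spdk_target_abort run.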
00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:06.266 10:10:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:06.266 ************************************ 00:21:06.266 START TEST spdk_target_abort 00:21:06.266 ************************************ 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:06.266 spdk_targetn1 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:06.266 [2024-11-04 10:10:38.271529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:06.266 [2024-11-04 10:10:38.309739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:06.266 10:10:38 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:06.266 10:10:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:09.548 Initializing NVMe Controllers 00:21:09.548 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:09.548 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:09.548 Initialization complete. Launching workers. 
00:21:09.548 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10889, failed: 0 00:21:09.548 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1030, failed to submit 9859 00:21:09.548 success 732, unsuccessful 298, failed 0 00:21:09.548 10:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:09.548 10:10:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:12.832 Initializing NVMe Controllers 00:21:12.832 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:12.832 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:12.832 Initialization complete. Launching workers. 00:21:12.832 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8926, failed: 0 00:21:12.832 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1129, failed to submit 7797 00:21:12.832 success 408, unsuccessful 721, failed 0 00:21:12.832 10:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:12.832 10:10:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:16.118 Initializing NVMe Controllers 00:21:16.118 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:16.118 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:16.118 Initialization complete. Launching workers. 
00:21:16.118 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31163, failed: 0 00:21:16.118 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2369, failed to submit 28794 00:21:16.118 success 437, unsuccessful 1932, failed 0 00:21:16.118 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:16.118 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.118 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.118 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.118 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:16.118 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.118 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84113 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 84113 ']' 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 84113 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84113 00:21:16.691 killing process with pid 84113 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84113' 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 84113 00:21:16.691 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 84113 00:21:16.973 00:21:16.973 real 0m10.767s 00:21:16.973 user 0m41.060s 00:21:16.973 sys 0m2.177s 00:21:16.973 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.973 ************************************ 00:21:16.973 10:10:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.973 END TEST spdk_target_abort 00:21:16.973 ************************************ 00:21:16.973 10:10:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:16.973 10:10:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:16.973 10:10:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.973 10:10:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:16.973 ************************************ 00:21:16.973 START TEST kernel_target_abort 00:21:16.973 
************************************ 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:16.973 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:17.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:17.491 Waiting for block devices as requested 00:21:17.491 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:17.491 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:17.748 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:17.749 No valid GPT data, bailing 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:17.749 No valid GPT data, bailing 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:17.749 No valid GPT data, bailing 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:17.749 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:18.007 No valid GPT data, bailing 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:21:18.007 10:10:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 --hostid=89901d6b-8f02-4106-8c0e-f8e118ca6735 -a 10.0.0.1 -t tcp -s 4420 00:21:18.007 00:21:18.007 Discovery Log Number of Records 2, Generation counter 2 00:21:18.007 =====Discovery Log Entry 0====== 00:21:18.007 trtype: tcp 00:21:18.007 adrfam: ipv4 00:21:18.007 subtype: current discovery subsystem 00:21:18.007 treq: not specified, sq flow control disable supported 00:21:18.007 portid: 1 00:21:18.007 trsvcid: 4420 00:21:18.007 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:18.007 traddr: 10.0.0.1 00:21:18.007 eflags: none 00:21:18.007 sectype: none 00:21:18.007 =====Discovery Log Entry 1====== 00:21:18.007 trtype: tcp 00:21:18.007 adrfam: ipv4 00:21:18.007 subtype: nvme subsystem 00:21:18.007 treq: not specified, sq flow control disable supported 00:21:18.007 portid: 1 00:21:18.007 trsvcid: 4420 00:21:18.007 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:18.007 traddr: 10.0.0.1 00:21:18.007 eflags: none 00:21:18.007 sectype: none 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:18.007 10:10:50 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:18.007 10:10:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:21.293 Initializing NVMe Controllers 00:21:21.293 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:21.293 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:21.293 Initialization complete. Launching workers. 00:21:21.293 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36026, failed: 0 00:21:21.293 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36026, failed to submit 0 00:21:21.293 success 0, unsuccessful 36026, failed 0 00:21:21.293 10:10:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:21.293 10:10:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:24.579 Initializing NVMe Controllers 00:21:24.579 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:24.579 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:24.579 Initialization complete. Launching workers. 
00:21:24.579 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67887, failed: 0 00:21:24.579 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29772, failed to submit 38115 00:21:24.579 success 0, unsuccessful 29772, failed 0 00:21:24.579 10:10:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:24.579 10:10:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:27.867 Initializing NVMe Controllers 00:21:27.867 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:27.867 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:27.867 Initialization complete. Launching workers. 00:21:27.867 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80833, failed: 0 00:21:27.867 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20170, failed to submit 60663 00:21:27.867 success 0, unsuccessful 20170, failed 0 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:27.867 10:10:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:28.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.338 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:30.339 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:30.339 00:21:30.339 real 0m13.314s 00:21:30.339 user 0m6.575s 00:21:30.339 sys 0m4.210s 00:21:30.339 10:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:30.339 ************************************ 00:21:30.339 END TEST kernel_target_abort 00:21:30.339 ************************************ 00:21:30.339 10:11:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:30.339 
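The kernel_target_abort test above drives the Linux kernel nvmet target purely through configfs, then points the SPDK abort example at it with increasing queue depths. Collected into a minimal standalone sketch below; the redirection targets (device_path, enable, the addr_* port attributes and attr_allow_any_host) are assumed from the standard nvmet configfs layout, because the xtrace hides redirections, and the 'SPDK-<nqn>' serial/model echo is left out.

    # assumed nvmet configfs attribute names; the trace above only shows the echoed values
    nqn=nqn.2016-06.io.spdk:testnqn
    cfs=/sys/kernel/config/nvmet
    mkdir "$cfs/subsystems/$nqn" "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1"
    echo 1            > "$cfs/subsystems/$nqn/attr_allow_any_host"
    echo /dev/nvme1n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$cfs/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
    echo tcp          > "$cfs/ports/1/addr_trtype"
    echo 4420         > "$cfs/ports/1/addr_trsvcid"
    echo ipv4         > "$cfs/ports/1/addr_adrfam"
    ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"
    # same queue-depth sweep as rabort() above; -M 50 gives a 50/50 read/write mix
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$nqn"
    done

Teardown mirrors this in reverse, as clean_kernel_target does above: disable the namespace, remove the port's subsystem symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.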
10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.339 rmmod nvme_tcp 00:21:30.339 rmmod nvme_fabrics 00:21:30.339 rmmod nvme_keyring 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84113 ']' 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84113 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 84113 ']' 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 84113 00:21:30.339 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (84113) - No such process 00:21:30.339 Process with pid 84113 is not found 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 84113 is not found' 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:30.339 10:11:02 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:30.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.905 Waiting for block devices as requested 00:21:30.905 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:30.905 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:30.905 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:31.164 10:11:03 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:21:31.164 00:21:31.164 real 0m27.185s 00:21:31.164 user 0m48.856s 00:21:31.164 sys 0m7.831s 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:31.164 10:11:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:31.164 ************************************ 00:21:31.164 END TEST nvmf_abort_qd_sizes 00:21:31.164 ************************************ 00:21:31.164 10:11:03 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:31.164 10:11:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:31.164 10:11:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:31.164 10:11:03 -- common/autotest_common.sh@10 -- # set +x 00:21:31.164 ************************************ 00:21:31.164 START TEST keyring_file 00:21:31.164 ************************************ 00:21:31.165 10:11:03 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:31.447 * Looking for test storage... 
00:21:31.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:31.447 10:11:03 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:31.447 10:11:03 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:21:31.447 10:11:03 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:31.447 10:11:03 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@345 -- # : 1 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:31.447 10:11:03 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@353 -- # local d=1 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@355 -- # echo 1 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@353 -- # local d=2 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@355 -- # echo 2 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@368 -- # return 0 00:21:31.448 10:11:03 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.448 10:11:03 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:31.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.448 --rc genhtml_branch_coverage=1 00:21:31.448 --rc genhtml_function_coverage=1 00:21:31.448 --rc genhtml_legend=1 00:21:31.448 --rc geninfo_all_blocks=1 00:21:31.448 --rc geninfo_unexecuted_blocks=1 00:21:31.448 00:21:31.448 ' 00:21:31.448 10:11:03 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:31.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.448 --rc genhtml_branch_coverage=1 00:21:31.448 --rc genhtml_function_coverage=1 00:21:31.448 --rc genhtml_legend=1 00:21:31.448 --rc geninfo_all_blocks=1 00:21:31.448 --rc 
geninfo_unexecuted_blocks=1 00:21:31.448 00:21:31.448 ' 00:21:31.448 10:11:03 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:31.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.448 --rc genhtml_branch_coverage=1 00:21:31.448 --rc genhtml_function_coverage=1 00:21:31.448 --rc genhtml_legend=1 00:21:31.448 --rc geninfo_all_blocks=1 00:21:31.448 --rc geninfo_unexecuted_blocks=1 00:21:31.448 00:21:31.448 ' 00:21:31.448 10:11:03 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:31.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.448 --rc genhtml_branch_coverage=1 00:21:31.448 --rc genhtml_function_coverage=1 00:21:31.448 --rc genhtml_legend=1 00:21:31.448 --rc geninfo_all_blocks=1 00:21:31.448 --rc geninfo_unexecuted_blocks=1 00:21:31.448 00:21:31.448 ' 00:21:31.448 10:11:03 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.448 10:11:03 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.448 10:11:03 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.448 10:11:03 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.448 10:11:03 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.448 10:11:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:31.448 10:11:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@51 -- # : 0 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:31.448 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:31.448 10:11:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:31.448 10:11:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:31.448 10:11:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:31.448 10:11:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:31.448 10:11:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:31.448 10:11:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:31.448 10:11:03 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VDNvRlNXpI 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:31.448 10:11:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VDNvRlNXpI 00:21:31.448 10:11:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VDNvRlNXpI 00:21:31.448 10:11:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VDNvRlNXpI 00:21:31.721 10:11:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fbLRBrWSRQ 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:31.721 10:11:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:31.721 10:11:03 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:31.721 10:11:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:31.721 10:11:03 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:31.721 10:11:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:31.721 10:11:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fbLRBrWSRQ 00:21:31.721 10:11:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fbLRBrWSRQ 00:21:31.721 10:11:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.fbLRBrWSRQ 00:21:31.721 10:11:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=85026 00:21:31.721 10:11:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85026 00:21:31.721 10:11:03 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85026 ']' 00:21:31.721 10:11:03 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:31.721 10:11:03 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.721 10:11:03 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:31.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
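prep_key, traced above, is what turns a raw hex secret into a usable TLS PSK file: it formats the key into the NVMeTLSkey-1 interchange form, writes it to a mktemp path and tightens the permissions to 0600 before anything else runs. A condensed sketch, assuming the output of format_interchange_psk is redirected into the temp file (the xtrace does not show redirections):

    # condensed from keyring/common.sh's prep_key as traced above
    name=key0
    key=00112233445566778899aabbccddeeff
    digest=0
    path=$(mktemp)                                     # /tmp/tmp.VDNvRlNXpI in this run
    format_interchange_psk "$key" "$digest" > "$path"  # assumed redirect; wraps the hex key as an NVMeTLSkey-1 string
    chmod 0600 "$path"                                 # keyring_file rejects looser modes, see the 0660 case below
    echo "$path"                                       # callers capture this as key0path / key1path

The same recipe produces key1 from 112233445566778899aabbccddeeff00 at /tmp/tmp.fbLRBrWSRQ.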
00:21:31.721 10:11:03 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.721 10:11:03 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:31.721 10:11:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:31.721 [2024-11-04 10:11:03.717981] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:21:31.721 [2024-11-04 10:11:03.718079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85026 ] 00:21:31.721 [2024-11-04 10:11:03.865305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.979 [2024-11-04 10:11:03.925520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.979 [2024-11-04 10:11:03.998276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:32.238 10:11:04 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:32.238 [2024-11-04 10:11:04.225325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.238 null0 00:21:32.238 [2024-11-04 10:11:04.257286] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.238 [2024-11-04 10:11:04.257708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.238 10:11:04 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:32.238 [2024-11-04 10:11:04.289312] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:32.238 request: 00:21:32.238 { 00:21:32.238 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:32.238 "secure_channel": false, 00:21:32.238 "listen_address": { 00:21:32.238 "trtype": "tcp", 00:21:32.238 "traddr": "127.0.0.1", 00:21:32.238 "trsvcid": "4420" 00:21:32.238 }, 00:21:32.238 "method": "nvmf_subsystem_add_listener", 
00:21:32.238 "req_id": 1 00:21:32.238 } 00:21:32.238 Got JSON-RPC error response 00:21:32.238 response: 00:21:32.238 { 00:21:32.238 "code": -32602, 00:21:32.238 "message": "Invalid parameters" 00:21:32.238 } 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:32.238 10:11:04 keyring_file -- keyring/file.sh@47 -- # bperfpid=85037 00:21:32.238 10:11:04 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:32.238 10:11:04 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85037 /var/tmp/bperf.sock 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85037 ']' 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:32.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:32.238 10:11:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:32.238 [2024-11-04 10:11:04.363699] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
00:21:32.238 [2024-11-04 10:11:04.363815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85037 ] 00:21:32.497 [2024-11-04 10:11:04.513957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.497 [2024-11-04 10:11:04.577642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.497 [2024-11-04 10:11:04.637100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:32.755 10:11:04 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:32.755 10:11:04 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:32.755 10:11:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VDNvRlNXpI 00:21:32.755 10:11:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VDNvRlNXpI 00:21:33.014 10:11:05 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fbLRBrWSRQ 00:21:33.014 10:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fbLRBrWSRQ 00:21:33.272 10:11:05 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:21:33.272 10:11:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:33.272 10:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.272 10:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:33.272 10:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.531 10:11:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.VDNvRlNXpI == \/\t\m\p\/\t\m\p\.\V\D\N\v\R\l\N\X\p\I ]] 00:21:33.531 10:11:05 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:21:33.531 10:11:05 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:21:33.531 10:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.531 10:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:33.531 10:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.789 10:11:05 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.fbLRBrWSRQ == \/\t\m\p\/\t\m\p\.\f\b\L\R\B\r\W\S\R\Q ]] 00:21:33.789 10:11:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:21:33.789 10:11:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:33.789 10:11:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:33.789 10:11:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:33.789 10:11:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:33.789 10:11:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:34.361 10:11:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:34.361 10:11:06 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:21:34.361 10:11:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:34.361 10:11:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:34.361 10:11:06 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.361 10:11:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.361 10:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:34.361 10:11:06 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:21:34.361 10:11:06 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:34.361 10:11:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:34.925 [2024-11-04 10:11:06.793470] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.925 nvme0n1 00:21:34.925 10:11:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:21:34.925 10:11:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:34.925 10:11:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:34.925 10:11:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.926 10:11:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.926 10:11:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:35.183 10:11:07 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:35.183 10:11:07 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:35.183 10:11:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:35.183 10:11:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:35.183 10:11:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.183 10:11:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.183 10:11:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:35.442 10:11:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:35.442 10:11:07 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:35.700 Running I/O for 1 seconds... 
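Every get_refcnt assertion in this test reduces to one RPC pattern over the bperf socket: list all keys with keyring_get_keys, pick the entry by name with jq, and read its refcnt field (2 while the attached controller holds the key, 1 when it is merely loaded). A minimal sketch of that helper pair, assuming the same rpc.py and socket path used above:

    # assumes bdevperf is serving RPC on /var/tmp/bperf.sock, as in this run
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_cmd()  { "$rpc" -s /var/tmp/bperf.sock "$@"; }
    get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }

    (( $(get_refcnt key0) == 2 ))   # key0 is referenced by the attached nvme0 controller
    (( $(get_refcnt key1) == 1 ))   # key1 is loaded but not in use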
00:21:36.633 11415.00 IOPS, 44.59 MiB/s 00:21:36.633 Latency(us) 00:21:36.633 [2024-11-04T10:11:08.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.633 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:36.633 nvme0n1 : 1.01 11468.20 44.80 0.00 0.00 11128.76 4438.57 23473.80 00:21:36.633 [2024-11-04T10:11:08.803Z] =================================================================================================================== 00:21:36.633 [2024-11-04T10:11:08.803Z] Total : 11468.20 44.80 0.00 0.00 11128.76 4438.57 23473.80 00:21:36.633 { 00:21:36.633 "results": [ 00:21:36.633 { 00:21:36.633 "job": "nvme0n1", 00:21:36.633 "core_mask": "0x2", 00:21:36.633 "workload": "randrw", 00:21:36.633 "percentage": 50, 00:21:36.633 "status": "finished", 00:21:36.633 "queue_depth": 128, 00:21:36.633 "io_size": 4096, 00:21:36.633 "runtime": 1.006697, 00:21:36.633 "iops": 11468.197481466617, 00:21:36.633 "mibps": 44.797646411978974, 00:21:36.633 "io_failed": 0, 00:21:36.633 "io_timeout": 0, 00:21:36.633 "avg_latency_us": 11128.759268947595, 00:21:36.633 "min_latency_us": 4438.574545454546, 00:21:36.633 "max_latency_us": 23473.803636363635 00:21:36.633 } 00:21:36.633 ], 00:21:36.633 "core_count": 1 00:21:36.633 } 00:21:36.633 10:11:08 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:36.633 10:11:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:36.893 10:11:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:36.893 10:11:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:36.893 10:11:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:36.893 10:11:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:36.893 10:11:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.893 10:11:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:37.460 10:11:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:37.460 10:11:09 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:37.460 10:11:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:37.460 10:11:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.460 10:11:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.460 10:11:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.460 10:11:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:37.719 10:11:09 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:37.719 10:11:09 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:37.719 10:11:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:37.719 10:11:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:37.719 10:11:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:37.719 10:11:09 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:37.719 10:11:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:37.719 10:11:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:37.719 10:11:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:37.719 10:11:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:37.977 [2024-11-04 10:11:09.973566] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:37.977 [2024-11-04 10:11:09.974546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b4e70 (107): Transport endpoint is not connected 00:21:37.977 [2024-11-04 10:11:09.975536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b4e70 (9): Bad file descriptor 00:21:37.977 [2024-11-04 10:11:09.976534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:37.977 [2024-11-04 10:11:09.976549] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:37.977 [2024-11-04 10:11:09.976559] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:37.977 [2024-11-04 10:11:09.976571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:21:37.977 request: 00:21:37.977 { 00:21:37.977 "name": "nvme0", 00:21:37.977 "trtype": "tcp", 00:21:37.977 "traddr": "127.0.0.1", 00:21:37.977 "adrfam": "ipv4", 00:21:37.977 "trsvcid": "4420", 00:21:37.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:37.977 "prchk_reftag": false, 00:21:37.977 "prchk_guard": false, 00:21:37.977 "hdgst": false, 00:21:37.977 "ddgst": false, 00:21:37.977 "psk": "key1", 00:21:37.977 "allow_unrecognized_csi": false, 00:21:37.977 "method": "bdev_nvme_attach_controller", 00:21:37.977 "req_id": 1 00:21:37.977 } 00:21:37.977 Got JSON-RPC error response 00:21:37.977 response: 00:21:37.977 { 00:21:37.977 "code": -5, 00:21:37.977 "message": "Input/output error" 00:21:37.977 } 00:21:37.977 10:11:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:37.977 10:11:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:37.977 10:11:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:37.977 10:11:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:37.977 10:11:10 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:37.977 10:11:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:37.977 10:11:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.977 10:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.977 10:11:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.977 10:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:38.278 10:11:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:38.278 10:11:10 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:38.278 10:11:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:38.278 10:11:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:38.278 10:11:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.278 10:11:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.278 10:11:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:38.543 10:11:10 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:38.543 10:11:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:38.544 10:11:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:38.802 10:11:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:38.802 10:11:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:39.060 10:11:11 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:39.060 10:11:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.060 10:11:11 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:39.318 10:11:11 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:39.318 10:11:11 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.VDNvRlNXpI 00:21:39.318 10:11:11 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VDNvRlNXpI 00:21:39.318 10:11:11 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:21:39.318 10:11:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VDNvRlNXpI 00:21:39.318 10:11:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:39.318 10:11:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.318 10:11:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:39.318 10:11:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.318 10:11:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VDNvRlNXpI 00:21:39.318 10:11:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VDNvRlNXpI 00:21:39.886 [2024-11-04 10:11:11.760954] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VDNvRlNXpI': 0100660 00:21:39.886 [2024-11-04 10:11:11.761212] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:39.886 request: 00:21:39.886 { 00:21:39.886 "name": "key0", 00:21:39.886 "path": "/tmp/tmp.VDNvRlNXpI", 00:21:39.886 "method": "keyring_file_add_key", 00:21:39.886 "req_id": 1 00:21:39.886 } 00:21:39.886 Got JSON-RPC error response 00:21:39.886 response: 00:21:39.886 { 00:21:39.886 "code": -1, 00:21:39.886 "message": "Operation not permitted" 00:21:39.886 } 00:21:39.886 10:11:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:39.886 10:11:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.886 10:11:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.886 10:11:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.886 10:11:11 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.VDNvRlNXpI 00:21:39.886 10:11:11 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VDNvRlNXpI 00:21:39.886 10:11:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VDNvRlNXpI 00:21:40.146 10:11:12 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.VDNvRlNXpI 00:21:40.146 10:11:12 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:40.146 10:11:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:40.146 10:11:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:40.146 10:11:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:40.146 10:11:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:40.146 10:11:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:40.405 10:11:12 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:40.405 10:11:12 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.405 10:11:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:40.405 10:11:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.405 10:11:12 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:40.405 10:11:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.405 10:11:12 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:40.405 10:11:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:40.405 10:11:12 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.405 10:11:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.665 [2024-11-04 10:11:12.685407] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VDNvRlNXpI': No such file or directory 00:21:40.665 [2024-11-04 10:11:12.685453] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:40.665 [2024-11-04 10:11:12.685474] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:40.665 [2024-11-04 10:11:12.685484] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:40.665 [2024-11-04 10:11:12.685495] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:40.665 [2024-11-04 10:11:12.685504] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:40.665 request: 00:21:40.665 { 00:21:40.665 "name": "nvme0", 00:21:40.665 "trtype": "tcp", 00:21:40.665 "traddr": "127.0.0.1", 00:21:40.665 "adrfam": "ipv4", 00:21:40.665 "trsvcid": "4420", 00:21:40.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:40.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:40.665 "prchk_reftag": false, 00:21:40.665 "prchk_guard": false, 00:21:40.665 "hdgst": false, 00:21:40.665 "ddgst": false, 00:21:40.665 "psk": "key0", 00:21:40.665 "allow_unrecognized_csi": false, 00:21:40.665 "method": "bdev_nvme_attach_controller", 00:21:40.665 "req_id": 1 00:21:40.665 } 00:21:40.665 Got JSON-RPC error response 00:21:40.665 response: 00:21:40.665 { 00:21:40.665 "code": -19, 00:21:40.665 "message": "No such device" 00:21:40.665 } 00:21:40.665 10:11:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:40.665 10:11:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:40.665 10:11:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:40.665 10:11:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:40.665 10:11:12 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:40.665 10:11:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:40.924 10:11:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:40.924 
10:11:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.006EvgiQf6 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:40.924 10:11:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:40.924 10:11:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:40.924 10:11:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:40.924 10:11:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:40.924 10:11:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:40.924 10:11:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.006EvgiQf6 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.006EvgiQf6 00:21:40.924 10:11:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.006EvgiQf6 00:21:40.924 10:11:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.006EvgiQf6 00:21:40.924 10:11:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.006EvgiQf6 00:21:41.182 10:11:13 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:41.182 10:11:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:41.750 nvme0n1 00:21:41.750 10:11:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:41.750 10:11:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:41.750 10:11:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:41.750 10:11:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.750 10:11:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:41.750 10:11:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:42.010 10:11:13 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:42.010 10:11:13 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:42.010 10:11:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:42.268 10:11:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:42.269 10:11:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:42.269 10:11:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:42.269 10:11:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.269 10:11:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:42.527 10:11:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:42.527 10:11:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:42.527 10:11:14 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:42.527 10:11:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:42.527 10:11:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:42.527 10:11:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.527 10:11:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:42.785 10:11:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:42.785 10:11:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:42.785 10:11:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:43.044 10:11:15 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:43.044 10:11:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.044 10:11:15 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:43.303 10:11:15 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:43.303 10:11:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.006EvgiQf6 00:21:43.303 10:11:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.006EvgiQf6 00:21:43.561 10:11:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fbLRBrWSRQ 00:21:43.561 10:11:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fbLRBrWSRQ 00:21:44.129 10:11:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:44.129 10:11:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:44.387 nvme0n1 00:21:44.387 10:11:16 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:44.387 10:11:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:44.646 10:11:16 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:44.646 "subsystems": [ 00:21:44.646 { 00:21:44.646 "subsystem": "keyring", 00:21:44.646 "config": [ 00:21:44.646 { 00:21:44.646 "method": "keyring_file_add_key", 00:21:44.646 "params": { 00:21:44.646 "name": "key0", 00:21:44.646 "path": "/tmp/tmp.006EvgiQf6" 00:21:44.646 } 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "method": "keyring_file_add_key", 00:21:44.646 "params": { 00:21:44.646 "name": "key1", 00:21:44.646 "path": "/tmp/tmp.fbLRBrWSRQ" 00:21:44.646 } 00:21:44.646 } 00:21:44.646 ] 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "subsystem": "iobuf", 00:21:44.646 "config": [ 00:21:44.646 { 00:21:44.646 "method": "iobuf_set_options", 00:21:44.646 "params": { 00:21:44.646 "small_pool_count": 8192, 00:21:44.646 "large_pool_count": 1024, 00:21:44.646 "small_bufsize": 8192, 00:21:44.646 "large_bufsize": 135168, 00:21:44.646 "enable_numa": false 00:21:44.646 } 00:21:44.646 } 00:21:44.646 ] 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "subsystem": 
"sock", 00:21:44.646 "config": [ 00:21:44.646 { 00:21:44.646 "method": "sock_set_default_impl", 00:21:44.646 "params": { 00:21:44.646 "impl_name": "uring" 00:21:44.646 } 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "method": "sock_impl_set_options", 00:21:44.646 "params": { 00:21:44.646 "impl_name": "ssl", 00:21:44.646 "recv_buf_size": 4096, 00:21:44.646 "send_buf_size": 4096, 00:21:44.646 "enable_recv_pipe": true, 00:21:44.646 "enable_quickack": false, 00:21:44.646 "enable_placement_id": 0, 00:21:44.646 "enable_zerocopy_send_server": true, 00:21:44.646 "enable_zerocopy_send_client": false, 00:21:44.646 "zerocopy_threshold": 0, 00:21:44.646 "tls_version": 0, 00:21:44.646 "enable_ktls": false 00:21:44.646 } 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "method": "sock_impl_set_options", 00:21:44.646 "params": { 00:21:44.646 "impl_name": "posix", 00:21:44.646 "recv_buf_size": 2097152, 00:21:44.646 "send_buf_size": 2097152, 00:21:44.646 "enable_recv_pipe": true, 00:21:44.646 "enable_quickack": false, 00:21:44.646 "enable_placement_id": 0, 00:21:44.646 "enable_zerocopy_send_server": true, 00:21:44.646 "enable_zerocopy_send_client": false, 00:21:44.646 "zerocopy_threshold": 0, 00:21:44.646 "tls_version": 0, 00:21:44.646 "enable_ktls": false 00:21:44.646 } 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "method": "sock_impl_set_options", 00:21:44.646 "params": { 00:21:44.646 "impl_name": "uring", 00:21:44.646 "recv_buf_size": 2097152, 00:21:44.646 "send_buf_size": 2097152, 00:21:44.646 "enable_recv_pipe": true, 00:21:44.646 "enable_quickack": false, 00:21:44.646 "enable_placement_id": 0, 00:21:44.646 "enable_zerocopy_send_server": false, 00:21:44.646 "enable_zerocopy_send_client": false, 00:21:44.646 "zerocopy_threshold": 0, 00:21:44.646 "tls_version": 0, 00:21:44.646 "enable_ktls": false 00:21:44.646 } 00:21:44.646 } 00:21:44.646 ] 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "subsystem": "vmd", 00:21:44.646 "config": [] 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "subsystem": "accel", 00:21:44.646 "config": [ 00:21:44.646 { 00:21:44.646 "method": "accel_set_options", 00:21:44.646 "params": { 00:21:44.646 "small_cache_size": 128, 00:21:44.646 "large_cache_size": 16, 00:21:44.646 "task_count": 2048, 00:21:44.646 "sequence_count": 2048, 00:21:44.646 "buf_count": 2048 00:21:44.646 } 00:21:44.646 } 00:21:44.646 ] 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "subsystem": "bdev", 00:21:44.646 "config": [ 00:21:44.646 { 00:21:44.646 "method": "bdev_set_options", 00:21:44.646 "params": { 00:21:44.646 "bdev_io_pool_size": 65535, 00:21:44.646 "bdev_io_cache_size": 256, 00:21:44.646 "bdev_auto_examine": true, 00:21:44.646 "iobuf_small_cache_size": 128, 00:21:44.646 "iobuf_large_cache_size": 16 00:21:44.646 } 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "method": "bdev_raid_set_options", 00:21:44.646 "params": { 00:21:44.646 "process_window_size_kb": 1024, 00:21:44.646 "process_max_bandwidth_mb_sec": 0 00:21:44.646 } 00:21:44.646 }, 00:21:44.646 { 00:21:44.646 "method": "bdev_iscsi_set_options", 00:21:44.646 "params": { 00:21:44.646 "timeout_sec": 30 00:21:44.647 } 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "method": "bdev_nvme_set_options", 00:21:44.647 "params": { 00:21:44.647 "action_on_timeout": "none", 00:21:44.647 "timeout_us": 0, 00:21:44.647 "timeout_admin_us": 0, 00:21:44.647 "keep_alive_timeout_ms": 10000, 00:21:44.647 "arbitration_burst": 0, 00:21:44.647 "low_priority_weight": 0, 00:21:44.647 "medium_priority_weight": 0, 00:21:44.647 "high_priority_weight": 0, 00:21:44.647 "nvme_adminq_poll_period_us": 
10000, 00:21:44.647 "nvme_ioq_poll_period_us": 0, 00:21:44.647 "io_queue_requests": 512, 00:21:44.647 "delay_cmd_submit": true, 00:21:44.647 "transport_retry_count": 4, 00:21:44.647 "bdev_retry_count": 3, 00:21:44.647 "transport_ack_timeout": 0, 00:21:44.647 "ctrlr_loss_timeout_sec": 0, 00:21:44.647 "reconnect_delay_sec": 0, 00:21:44.647 "fast_io_fail_timeout_sec": 0, 00:21:44.647 "disable_auto_failback": false, 00:21:44.647 "generate_uuids": false, 00:21:44.647 "transport_tos": 0, 00:21:44.647 "nvme_error_stat": false, 00:21:44.647 "rdma_srq_size": 0, 00:21:44.647 "io_path_stat": false, 00:21:44.647 "allow_accel_sequence": false, 00:21:44.647 "rdma_max_cq_size": 0, 00:21:44.647 "rdma_cm_event_timeout_ms": 0, 00:21:44.647 "dhchap_digests": [ 00:21:44.647 "sha256", 00:21:44.647 "sha384", 00:21:44.647 "sha512" 00:21:44.647 ], 00:21:44.647 "dhchap_dhgroups": [ 00:21:44.647 "null", 00:21:44.647 "ffdhe2048", 00:21:44.647 "ffdhe3072", 00:21:44.647 "ffdhe4096", 00:21:44.647 "ffdhe6144", 00:21:44.647 "ffdhe8192" 00:21:44.647 ] 00:21:44.647 } 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "method": "bdev_nvme_attach_controller", 00:21:44.647 "params": { 00:21:44.647 "name": "nvme0", 00:21:44.647 "trtype": "TCP", 00:21:44.647 "adrfam": "IPv4", 00:21:44.647 "traddr": "127.0.0.1", 00:21:44.647 "trsvcid": "4420", 00:21:44.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.647 "prchk_reftag": false, 00:21:44.647 "prchk_guard": false, 00:21:44.647 "ctrlr_loss_timeout_sec": 0, 00:21:44.647 "reconnect_delay_sec": 0, 00:21:44.647 "fast_io_fail_timeout_sec": 0, 00:21:44.647 "psk": "key0", 00:21:44.647 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:44.647 "hdgst": false, 00:21:44.647 "ddgst": false, 00:21:44.647 "multipath": "multipath" 00:21:44.647 } 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "method": "bdev_nvme_set_hotplug", 00:21:44.647 "params": { 00:21:44.647 "period_us": 100000, 00:21:44.647 "enable": false 00:21:44.647 } 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "method": "bdev_wait_for_examine" 00:21:44.647 } 00:21:44.647 ] 00:21:44.647 }, 00:21:44.647 { 00:21:44.647 "subsystem": "nbd", 00:21:44.647 "config": [] 00:21:44.647 } 00:21:44.647 ] 00:21:44.647 }' 00:21:44.647 10:11:16 keyring_file -- keyring/file.sh@115 -- # killprocess 85037 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85037 ']' 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85037 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85037 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:44.647 killing process with pid 85037 00:21:44.647 Received shutdown signal, test time was about 1.000000 seconds 00:21:44.647 00:21:44.647 Latency(us) 00:21:44.647 [2024-11-04T10:11:16.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.647 [2024-11-04T10:11:16.817Z] =================================================================================================================== 00:21:44.647 [2024-11-04T10:11:16.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85037' 00:21:44.647 
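The test has just captured the live bdevperf configuration with save_config; in the lines that follow it kills that bdevperf (pid 85037) and launches a new one with the same JSON fed back in over /dev/fd/63. A minimal sketch of that round-trip, reusing the paths and flags from this run (the config shell variable is illustrative only, not part of the test script):

  # capture the running keyring + bdev configuration over the bperf RPC socket
  config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  # replay it into a fresh bdevperf; the <(...) process substitution is what shows up as -c /dev/fd/63 in the trace
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")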
10:11:16 keyring_file -- common/autotest_common.sh@971 -- # kill 85037 00:21:44.647 10:11:16 keyring_file -- common/autotest_common.sh@976 -- # wait 85037 00:21:44.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:44.906 10:11:16 keyring_file -- keyring/file.sh@118 -- # bperfpid=85291 00:21:44.906 10:11:16 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85291 /var/tmp/bperf.sock 00:21:44.906 10:11:16 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85291 ']' 00:21:44.906 10:11:16 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:44.906 10:11:16 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:44.906 10:11:16 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:44.906 10:11:16 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:44.906 10:11:16 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:44.906 "subsystems": [ 00:21:44.906 { 00:21:44.906 "subsystem": "keyring", 00:21:44.906 "config": [ 00:21:44.906 { 00:21:44.906 "method": "keyring_file_add_key", 00:21:44.906 "params": { 00:21:44.906 "name": "key0", 00:21:44.906 "path": "/tmp/tmp.006EvgiQf6" 00:21:44.906 } 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "method": "keyring_file_add_key", 00:21:44.906 "params": { 00:21:44.906 "name": "key1", 00:21:44.906 "path": "/tmp/tmp.fbLRBrWSRQ" 00:21:44.906 } 00:21:44.906 } 00:21:44.906 ] 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "subsystem": "iobuf", 00:21:44.906 "config": [ 00:21:44.906 { 00:21:44.906 "method": "iobuf_set_options", 00:21:44.906 "params": { 00:21:44.906 "small_pool_count": 8192, 00:21:44.906 "large_pool_count": 1024, 00:21:44.906 "small_bufsize": 8192, 00:21:44.906 "large_bufsize": 135168, 00:21:44.906 "enable_numa": false 00:21:44.906 } 00:21:44.906 } 00:21:44.906 ] 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "subsystem": "sock", 00:21:44.906 "config": [ 00:21:44.906 { 00:21:44.906 "method": "sock_set_default_impl", 00:21:44.906 "params": { 00:21:44.906 "impl_name": "uring" 00:21:44.906 } 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "method": "sock_impl_set_options", 00:21:44.906 "params": { 00:21:44.906 "impl_name": "ssl", 00:21:44.906 "recv_buf_size": 4096, 00:21:44.906 "send_buf_size": 4096, 00:21:44.906 "enable_recv_pipe": true, 00:21:44.906 "enable_quickack": false, 00:21:44.906 "enable_placement_id": 0, 00:21:44.906 "enable_zerocopy_send_server": true, 00:21:44.906 "enable_zerocopy_send_client": false, 00:21:44.906 "zerocopy_threshold": 0, 00:21:44.906 "tls_version": 0, 00:21:44.906 "enable_ktls": false 00:21:44.906 } 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "method": "sock_impl_set_options", 00:21:44.906 "params": { 00:21:44.906 "impl_name": "posix", 00:21:44.906 "recv_buf_size": 2097152, 00:21:44.906 "send_buf_size": 2097152, 00:21:44.906 "enable_recv_pipe": true, 00:21:44.906 "enable_quickack": false, 00:21:44.906 "enable_placement_id": 0, 00:21:44.906 "enable_zerocopy_send_server": true, 00:21:44.906 "enable_zerocopy_send_client": false, 00:21:44.906 "zerocopy_threshold": 0, 00:21:44.906 "tls_version": 0, 00:21:44.906 "enable_ktls": false 00:21:44.906 } 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "method": "sock_impl_set_options", 00:21:44.906 "params": { 00:21:44.906 "impl_name": "uring", 00:21:44.906 
"recv_buf_size": 2097152, 00:21:44.906 "send_buf_size": 2097152, 00:21:44.906 "enable_recv_pipe": true, 00:21:44.906 "enable_quickack": false, 00:21:44.906 "enable_placement_id": 0, 00:21:44.906 "enable_zerocopy_send_server": false, 00:21:44.906 "enable_zerocopy_send_client": false, 00:21:44.906 "zerocopy_threshold": 0, 00:21:44.906 "tls_version": 0, 00:21:44.906 "enable_ktls": false 00:21:44.906 } 00:21:44.906 } 00:21:44.906 ] 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "subsystem": "vmd", 00:21:44.906 "config": [] 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "subsystem": "accel", 00:21:44.906 "config": [ 00:21:44.906 { 00:21:44.906 "method": "accel_set_options", 00:21:44.906 "params": { 00:21:44.906 "small_cache_size": 128, 00:21:44.906 "large_cache_size": 16, 00:21:44.906 "task_count": 2048, 00:21:44.906 "sequence_count": 2048, 00:21:44.906 "buf_count": 2048 00:21:44.906 } 00:21:44.906 } 00:21:44.906 ] 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "subsystem": "bdev", 00:21:44.906 "config": [ 00:21:44.906 { 00:21:44.906 "method": "bdev_set_options", 00:21:44.906 "params": { 00:21:44.906 "bdev_io_pool_size": 65535, 00:21:44.906 "bdev_io_cache_size": 256, 00:21:44.906 "bdev_auto_examine": true, 00:21:44.906 "iobuf_small_cache_size": 128, 00:21:44.906 "iobuf_large_cache_size": 16 00:21:44.906 } 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "method": "bdev_raid_set_options", 00:21:44.906 "params": { 00:21:44.906 "process_window_size_kb": 1024, 00:21:44.906 "process_max_bandwidth_mb_sec": 0 00:21:44.906 } 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "method": "bdev_iscsi_set_options", 00:21:44.906 "params": { 00:21:44.906 "timeout_sec": 30 00:21:44.906 } 00:21:44.906 }, 00:21:44.906 { 00:21:44.906 "method": "bdev_nvme_set_options", 00:21:44.906 "params": { 00:21:44.906 "action_on_timeout": "none", 00:21:44.906 "timeout_us": 0, 00:21:44.906 "timeout_admin_us": 0, 00:21:44.906 "keep_alive_timeout_ms": 10000, 00:21:44.906 "arbitration_burst": 0, 00:21:44.906 "low_priority_weight": 0, 00:21:44.906 "medium_priority_weight": 0, 00:21:44.906 "high_priority_weight": 0, 00:21:44.906 "nvme_adminq_poll_period_us": 10000, 00:21:44.906 "nvme_ioq_poll_period_us": 0, 00:21:44.906 "io_queue_requests": 512, 00:21:44.906 "delay_cmd_submit": true, 00:21:44.906 "transport_retry_count": 4, 00:21:44.906 "bdev_retry_count": 3, 00:21:44.906 "transport_ack_timeout": 0, 00:21:44.906 "ctrlr_loss_timeout_sec": 0, 00:21:44.906 "reconnect_delay_sec": 0, 00:21:44.906 "fast_io_fail_timeout_sec": 0, 00:21:44.906 "disable_auto_failback": false, 00:21:44.906 "generate_uuids": false, 00:21:44.906 "transport_tos": 0, 00:21:44.906 "nvme_error_stat": false, 00:21:44.906 "rdma_srq_size": 0, 00:21:44.906 "io_path_stat": false, 00:21:44.906 "allow_accel_sequence": false, 00:21:44.907 "rdma_max_cq_size": 0, 00:21:44.907 "rdma_cm_event_timeout_ms": 0, 00:21:44.907 "dhchap_digests": [ 00:21:44.907 "sha256", 00:21:44.907 "sha384", 00:21:44.907 "sha512" 00:21:44.907 ], 00:21:44.907 "dhchap_dhgroups": [ 00:21:44.907 "null", 00:21:44.907 "ffdhe2048", 00:21:44.907 "ffdhe3072", 00:21:44.907 "ffdhe4096", 00:21:44.907 "ffdhe6144", 00:21:44.907 "ffdhe8192" 00:21:44.907 ] 00:21:44.907 } 00:21:44.907 }, 00:21:44.907 { 00:21:44.907 "method": "bdev_nvme_attach_controller", 00:21:44.907 "params": { 00:21:44.907 "name": "nvme0", 00:21:44.907 "trtype": "TCP", 00:21:44.907 "adrfam": "IPv4", 00:21:44.907 "traddr": "127.0.0.1", 00:21:44.907 "trsvcid": "4420", 00:21:44.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.907 "prchk_reftag": false, 00:21:44.907 
"prchk_guard": false, 00:21:44.907 "ctrlr_loss_timeout_sec": 0, 00:21:44.907 "reconnect_delay_sec": 0, 00:21:44.907 "fast_io_fail_timeout_sec": 0, 00:21:44.907 "psk": "key0", 00:21:44.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:44.907 "hdgst": false, 00:21:44.907 "ddgst": false, 00:21:44.907 "multipath": "multipath" 00:21:44.907 } 00:21:44.907 }, 00:21:44.907 { 00:21:44.907 "method": "bdev_nvme_set_hotplug", 00:21:44.907 "params": { 00:21:44.907 "period_us": 100000, 00:21:44.907 "enable": false 00:21:44.907 } 00:21:44.907 }, 00:21:44.907 { 00:21:44.907 "method": "bdev_wait_for_examine" 00:21:44.907 } 00:21:44.907 ] 00:21:44.907 }, 00:21:44.907 { 00:21:44.907 "subsystem": "nbd", 00:21:44.907 "config": [] 00:21:44.907 } 00:21:44.907 ] 00:21:44.907 }' 00:21:44.907 10:11:16 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:44.907 10:11:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:44.907 [2024-11-04 10:11:16.919154] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 00:21:44.907 [2024-11-04 10:11:16.919566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85291 ] 00:21:44.907 [2024-11-04 10:11:17.064986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.165 [2024-11-04 10:11:17.115912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.165 [2024-11-04 10:11:17.255632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:45.165 [2024-11-04 10:11:17.310055] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.972 10:11:17 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:45.972 10:11:17 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:45.972 10:11:17 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:45.972 10:11:17 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:45.972 10:11:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.254 10:11:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:46.254 10:11:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:46.254 10:11:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:46.254 10:11:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:46.254 10:11:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:46.254 10:11:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.254 10:11:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:46.516 10:11:18 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:46.516 10:11:18 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:46.516 10:11:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:46.516 10:11:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:46.516 10:11:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:46.516 10:11:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:46.516 10:11:18 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.775 10:11:18 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:46.775 10:11:18 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:46.775 10:11:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:46.775 10:11:18 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:47.034 10:11:19 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:47.034 10:11:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:47.034 10:11:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.006EvgiQf6 /tmp/tmp.fbLRBrWSRQ 00:21:47.034 10:11:19 keyring_file -- keyring/file.sh@20 -- # killprocess 85291 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85291 ']' 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85291 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85291 00:21:47.034 killing process with pid 85291 00:21:47.034 Received shutdown signal, test time was about 1.000000 seconds 00:21:47.034 00:21:47.034 Latency(us) 00:21:47.034 [2024-11-04T10:11:19.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.034 [2024-11-04T10:11:19.204Z] =================================================================================================================== 00:21:47.034 [2024-11-04T10:11:19.204Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85291' 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@971 -- # kill 85291 00:21:47.034 10:11:19 keyring_file -- common/autotest_common.sh@976 -- # wait 85291 00:21:47.293 10:11:19 keyring_file -- keyring/file.sh@21 -- # killprocess 85026 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85026 ']' 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85026 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85026 00:21:47.293 killing process with pid 85026 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85026' 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@971 -- # kill 85026 00:21:47.293 10:11:19 keyring_file -- common/autotest_common.sh@976 -- # wait 85026 00:21:47.860 ************************************ 00:21:47.860 END TEST keyring_file 00:21:47.860 ************************************ 00:21:47.860 00:21:47.860 real 0m16.498s 00:21:47.860 user 0m42.173s 
00:21:47.860 sys 0m3.179s 00:21:47.860 10:11:19 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:47.860 10:11:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:47.860 10:11:19 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:21:47.860 10:11:19 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:47.860 10:11:19 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:47.860 10:11:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:47.860 10:11:19 -- common/autotest_common.sh@10 -- # set +x 00:21:47.860 ************************************ 00:21:47.860 START TEST keyring_linux 00:21:47.860 ************************************ 00:21:47.860 10:11:19 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:47.860 Joined session keyring: 262455617 00:21:47.860 * Looking for test storage... 00:21:47.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:47.860 10:11:19 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:47.860 10:11:19 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:21:47.860 10:11:19 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:48.119 10:11:20 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.119 10:11:20 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:48.119 10:11:20 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.119 10:11:20 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.119 --rc genhtml_branch_coverage=1 00:21:48.119 --rc genhtml_function_coverage=1 00:21:48.119 --rc genhtml_legend=1 00:21:48.119 --rc geninfo_all_blocks=1 00:21:48.119 --rc geninfo_unexecuted_blocks=1 00:21:48.119 00:21:48.119 ' 00:21:48.119 10:11:20 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:48.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.120 --rc genhtml_branch_coverage=1 00:21:48.120 --rc genhtml_function_coverage=1 00:21:48.120 --rc genhtml_legend=1 00:21:48.120 --rc geninfo_all_blocks=1 00:21:48.120 --rc geninfo_unexecuted_blocks=1 00:21:48.120 00:21:48.120 ' 00:21:48.120 10:11:20 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:48.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.120 --rc genhtml_branch_coverage=1 00:21:48.120 --rc genhtml_function_coverage=1 00:21:48.120 --rc genhtml_legend=1 00:21:48.120 --rc geninfo_all_blocks=1 00:21:48.120 --rc geninfo_unexecuted_blocks=1 00:21:48.120 00:21:48.120 ' 00:21:48.120 10:11:20 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:48.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.120 --rc genhtml_branch_coverage=1 00:21:48.120 --rc genhtml_function_coverage=1 00:21:48.120 --rc genhtml_legend=1 00:21:48.120 --rc geninfo_all_blocks=1 00:21:48.120 --rc geninfo_unexecuted_blocks=1 00:21:48.120 00:21:48.120 ' 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.120 10:11:20 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:89901d6b-8f02-4106-8c0e-f8e118ca6735 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=89901d6b-8f02-4106-8c0e-f8e118ca6735 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:48.120 10:11:20 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.120 10:11:20 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.120 10:11:20 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.120 10:11:20 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.120 10:11:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.120 10:11:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.120 10:11:20 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.120 10:11:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:48.120 10:11:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.120 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:48.120 /tmp/:spdk-test:key0 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:48.120 10:11:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:48.120 /tmp/:spdk-test:key1 00:21:48.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.120 10:11:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85418 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:48.120 10:11:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85418 00:21:48.120 10:11:20 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85418 ']' 00:21:48.120 10:11:20 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.120 10:11:20 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:48.120 10:11:20 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.120 10:11:20 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:48.120 10:11:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:48.379 [2024-11-04 10:11:20.299105] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
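The two prep_key calls above convert the raw hex keys (00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00) into the NVMe/TCP PSK interchange format and write them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. As a sketch of what the inline "python -" step produces, assuming the interchange string is the configured key bytes followed by a little-endian CRC-32, base64-encoded and wrapped as NVMeTLSkey-1:<digest>:...: (digest 00 here because no PSK digest was requested), the one-liner below should reproduce the NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value that the keyctl add further down loads into the session keyring:

  # hypothetical re-derivation of the key0 interchange string; the CRC-32 suffix is an assumption about the format
  python3 -c 'import base64,struct,zlib; k=b"00112233445566778899aabbccddeeff"; print("NVMeTLSkey-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k) & 0xffffffff)).decode() + ":")'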
00:21:48.379 [2024-11-04 10:11:20.299507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85418 ] 00:21:48.379 [2024-11-04 10:11:20.449196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.379 [2024-11-04 10:11:20.512062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.638 [2024-11-04 10:11:20.596201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:21:48.897 10:11:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:48.897 [2024-11-04 10:11:20.838524] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.897 null0 00:21:48.897 [2024-11-04 10:11:20.870502] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:48.897 [2024-11-04 10:11:20.870731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.897 10:11:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:48.897 26249722 00:21:48.897 10:11:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:48.897 514437 00:21:48.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:48.897 10:11:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85429 00:21:48.897 10:11:20 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:48.897 10:11:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85429 /var/tmp/bperf.sock 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85429 ']' 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:48.897 10:11:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:48.897 [2024-11-04 10:11:20.959361] Starting SPDK v25.01-pre git sha1 fcc19e276 / DPDK 24.03.0 initialization... 
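The two keyctl add calls above stash the formatted PSKs in the caller's session keyring under the names :spdk-test:key0 and :spdk-test:key1, and the serials they return (26249722 and 514437) are what linux.sh later resolves with keyctl search, dumps with keyctl print, and removes with keyctl unlink. A minimal sketch of that key lifecycle using the same names (the sn variable and the elided payload are placeholders):

  sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:<base64 payload as generated above>:" @s)   # returns the kernel key serial
  keyctl search @s user :spdk-test:key0   # resolves the name back to the same serial
  keyctl print "$sn"                      # dumps the stored PSK payload
  keyctl unlink "$sn"                     # drops the key from the session keyring during cleanup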
00:21:48.897 [2024-11-04 10:11:20.959664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85429 ] 00:21:49.156 [2024-11-04 10:11:21.111231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.156 [2024-11-04 10:11:21.191050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.092 10:11:22 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:50.092 10:11:22 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:21:50.092 10:11:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:50.092 10:11:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:50.350 10:11:22 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:50.350 10:11:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:50.609 [2024-11-04 10:11:22.706991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:50.609 10:11:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:50.609 10:11:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:50.867 [2024-11-04 10:11:23.030711] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.125 nvme0n1 00:21:51.125 10:11:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:51.125 10:11:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:51.125 10:11:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:51.125 10:11:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:51.125 10:11:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:51.125 10:11:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.383 10:11:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:51.383 10:11:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:51.383 10:11:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:51.383 10:11:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:51.383 10:11:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:51.383 10:11:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.383 10:11:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.641 10:11:23 keyring_linux -- keyring/linux.sh@25 -- # sn=26249722 00:21:51.641 10:11:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:51.641 10:11:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:51.641 
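The sequence just traced is the core of the keyring_linux setup: bdevperf is launched with --wait-for-rpc so the framework stays paused, keyring_linux_set_options --enable switches on the Linux-keyring backend, framework_start_init finishes bringing the app up, and only then is the controller attached with --psk :spdk-test:key0, a name the backend resolves through the session keyring rather than a file. Condensed into a sketch with the same binaries, socket and NQNs as this run (the rpc shell variable is just shorthand):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z --wait-for-rpc &
  # (the test waits for /var/tmp/bperf.sock to appear before issuing RPCs)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable   # enabled before framework_start_init, as in the trace above
  $rpc -s /var/tmp/bperf.sock framework_start_init
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0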
10:11:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 26249722 == \2\6\2\4\9\7\2\2 ]] 00:21:51.641 10:11:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 26249722 00:21:51.641 10:11:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:51.641 10:11:23 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:51.899 Running I/O for 1 seconds... 00:21:52.833 12429.00 IOPS, 48.55 MiB/s 00:21:52.833 Latency(us) 00:21:52.833 [2024-11-04T10:11:25.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.833 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:52.833 nvme0n1 : 1.01 12440.86 48.60 0.00 0.00 10235.54 5451.40 16205.27 00:21:52.833 [2024-11-04T10:11:25.003Z] =================================================================================================================== 00:21:52.833 [2024-11-04T10:11:25.003Z] Total : 12440.86 48.60 0.00 0.00 10235.54 5451.40 16205.27 00:21:52.833 { 00:21:52.833 "results": [ 00:21:52.833 { 00:21:52.833 "job": "nvme0n1", 00:21:52.833 "core_mask": "0x2", 00:21:52.833 "workload": "randread", 00:21:52.833 "status": "finished", 00:21:52.833 "queue_depth": 128, 00:21:52.833 "io_size": 4096, 00:21:52.833 "runtime": 1.009416, 00:21:52.833 "iops": 12440.856891509546, 00:21:52.833 "mibps": 48.59709723245916, 00:21:52.833 "io_failed": 0, 00:21:52.833 "io_timeout": 0, 00:21:52.833 "avg_latency_us": 10235.538041089345, 00:21:52.833 "min_latency_us": 5451.403636363636, 00:21:52.833 "max_latency_us": 16205.265454545455 00:21:52.833 } 00:21:52.833 ], 00:21:52.833 "core_count": 1 00:21:52.833 } 00:21:52.833 10:11:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:52.833 10:11:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:53.092 10:11:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:53.092 10:11:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:53.092 10:11:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:53.092 10:11:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:53.092 10:11:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:53.092 10:11:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:53.658 10:11:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:53.658 10:11:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:53.658 10:11:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:53.658 10:11:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:53.658 10:11:25 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:21:53.658 10:11:25 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:53.658 10:11:25 
keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:53.658 10:11:25 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.658 10:11:25 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:53.658 10:11:25 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.658 10:11:25 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:53.658 10:11:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:53.917 [2024-11-04 10:11:25.865736] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:53.917 [2024-11-04 10:11:25.866625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174ad20 (107): Transport endpoint is not connected 00:21:53.917 [2024-11-04 10:11:25.867615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174ad20 (9): Bad file descriptor 00:21:53.917 [2024-11-04 10:11:25.868612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:53.917 [2024-11-04 10:11:25.868635] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:53.917 [2024-11-04 10:11:25.868645] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:53.917 [2024-11-04 10:11:25.868657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
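The attach attempt above deliberately uses :spdk-test:key1, which is not the PSK the target side expects for this host, so the connection fails and the JSON-RPC error dump that follows is the intended outcome; the es=1 handling afterwards is what turns that failure into a passing check. The same expected-failure pattern written out directly (the if/else wrapper is illustrative; the test script uses its NOT helper):

  # expected-failure check: attaching with the wrong key must return an error
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
      echo "attach with :spdk-test:key1 unexpectedly succeeded" >&2
      exit 1
  fi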
00:21:53.917 request: 00:21:53.917 { 00:21:53.917 "name": "nvme0", 00:21:53.917 "trtype": "tcp", 00:21:53.917 "traddr": "127.0.0.1", 00:21:53.917 "adrfam": "ipv4", 00:21:53.917 "trsvcid": "4420", 00:21:53.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.917 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:53.917 "prchk_reftag": false, 00:21:53.917 "prchk_guard": false, 00:21:53.917 "hdgst": false, 00:21:53.917 "ddgst": false, 00:21:53.917 "psk": ":spdk-test:key1", 00:21:53.917 "allow_unrecognized_csi": false, 00:21:53.917 "method": "bdev_nvme_attach_controller", 00:21:53.917 "req_id": 1 00:21:53.917 } 00:21:53.917 Got JSON-RPC error response 00:21:53.917 response: 00:21:53.917 { 00:21:53.917 "code": -5, 00:21:53.917 "message": "Input/output error" 00:21:53.917 } 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@33 -- # sn=26249722 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 26249722 00:21:53.917 1 links removed 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@33 -- # sn=514437 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 514437 00:21:53.917 1 links removed 00:21:53.917 10:11:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85429 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85429 ']' 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85429 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85429 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:53.917 killing process with pid 85429 00:21:53.917 Received shutdown signal, test time was about 1.000000 seconds 00:21:53.917 00:21:53.917 Latency(us) 00:21:53.917 [2024-11-04T10:11:26.087Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.917 [2024-11-04T10:11:26.087Z] =================================================================================================================== 00:21:53.917 [2024-11-04T10:11:26.087Z] 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85429' 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@971 -- # kill 85429 00:21:53.917 10:11:25 keyring_linux -- common/autotest_common.sh@976 -- # wait 85429 00:21:54.176 10:11:26 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85418 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85418 ']' 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85418 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85418 00:21:54.176 killing process with pid 85418 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85418' 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@971 -- # kill 85418 00:21:54.176 10:11:26 keyring_linux -- common/autotest_common.sh@976 -- # wait 85418 00:21:54.480 00:21:54.480 real 0m6.744s 00:21:54.480 user 0m13.600s 00:21:54.480 sys 0m1.727s 00:21:54.480 10:11:26 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:54.480 ************************************ 00:21:54.480 END TEST keyring_linux 00:21:54.480 ************************************ 00:21:54.480 10:11:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:54.753 10:11:26 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:54.753 10:11:26 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:54.753 10:11:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:54.753 10:11:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:54.753 10:11:26 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:54.753 10:11:26 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:21:54.753 10:11:26 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:54.753 10:11:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.753 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:21:54.753 10:11:26 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:54.753 10:11:26 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:21:54.753 10:11:26 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:21:54.753 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:21:56.654 INFO: APP EXITING 00:21:56.654 INFO: killing all VMs 00:21:56.654 
INFO: killing vhost app 00:21:56.654 INFO: EXIT DONE 00:21:57.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:57.221 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:57.221 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:58.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:58.196 Cleaning 00:21:58.196 Removing: /var/run/dpdk/spdk0/config 00:21:58.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:58.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:58.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:58.196 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:58.196 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:58.196 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:58.196 Removing: /var/run/dpdk/spdk1/config 00:21:58.196 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:58.196 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:58.196 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:58.196 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:58.196 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:58.196 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:58.196 Removing: /var/run/dpdk/spdk2/config 00:21:58.196 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:58.196 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:58.196 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:58.196 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:58.196 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:58.196 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:58.196 Removing: /var/run/dpdk/spdk3/config 00:21:58.196 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:58.196 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:58.196 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:58.196 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:58.196 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:58.196 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:58.196 Removing: /var/run/dpdk/spdk4/config 00:21:58.196 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:58.196 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:58.196 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:58.196 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:58.196 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:58.196 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:58.196 Removing: /dev/shm/nvmf_trace.0 00:21:58.196 Removing: /dev/shm/spdk_tgt_trace.pid56710 00:21:58.196 Removing: /var/run/dpdk/spdk0 00:21:58.196 Removing: /var/run/dpdk/spdk1 00:21:58.196 Removing: /var/run/dpdk/spdk2 00:21:58.196 Removing: /var/run/dpdk/spdk3 00:21:58.196 Removing: /var/run/dpdk/spdk4 00:21:58.196 Removing: /var/run/dpdk/spdk_pid56552 00:21:58.196 Removing: /var/run/dpdk/spdk_pid56710 00:21:58.196 Removing: /var/run/dpdk/spdk_pid56909 00:21:58.196 Removing: /var/run/dpdk/spdk_pid56990 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57015 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57125 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57135 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57270 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57471 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57625 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57698 00:21:58.196 Removing: 
/var/run/dpdk/spdk_pid57780 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57866 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57949 00:21:58.196 Removing: /var/run/dpdk/spdk_pid57982 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58012 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58087 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58172 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58623 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58670 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58713 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58729 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58796 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58805 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58872 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58888 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58933 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58944 00:21:58.197 Removing: /var/run/dpdk/spdk_pid58990 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59008 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59139 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59174 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59257 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59583 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59601 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59632 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59651 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59661 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59685 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59699 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59720 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59739 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59747 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59768 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59787 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59806 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59816 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59835 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59854 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59875 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59894 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59902 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59923 00:21:58.197 Removing: /var/run/dpdk/spdk_pid59959 00:21:58.456 Removing: /var/run/dpdk/spdk_pid59969 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60004 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60076 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60099 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60114 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60137 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60152 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60160 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60204 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60218 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60246 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60256 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60265 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60276 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60284 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60300 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60305 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60319 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60343 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60375 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60380 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60413 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60423 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60430 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60471 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60482 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60513 
00:21:58.456 Removing: /var/run/dpdk/spdk_pid60516 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60529 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60537 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60544 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60552 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60559 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60568 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60650 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60699 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60812 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60851 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60896 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60910 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60927 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60947 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60984 00:21:58.456 Removing: /var/run/dpdk/spdk_pid60994 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61072 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61099 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61143 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61211 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61267 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61297 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61393 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61440 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61478 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61705 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61802 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61831 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61862 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61894 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61925 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61961 00:21:58.456 Removing: /var/run/dpdk/spdk_pid61998 00:21:58.456 Removing: /var/run/dpdk/spdk_pid62407 00:21:58.456 Removing: /var/run/dpdk/spdk_pid62445 00:21:58.456 Removing: /var/run/dpdk/spdk_pid62799 00:21:58.456 Removing: /var/run/dpdk/spdk_pid63258 00:21:58.456 Removing: /var/run/dpdk/spdk_pid63533 00:21:58.456 Removing: /var/run/dpdk/spdk_pid64409 00:21:58.456 Removing: /var/run/dpdk/spdk_pid65321 00:21:58.456 Removing: /var/run/dpdk/spdk_pid65444 00:21:58.456 Removing: /var/run/dpdk/spdk_pid65506 00:21:58.456 Removing: /var/run/dpdk/spdk_pid66919 00:21:58.456 Removing: /var/run/dpdk/spdk_pid67233 00:21:58.456 Removing: /var/run/dpdk/spdk_pid70989 00:21:58.456 Removing: /var/run/dpdk/spdk_pid71354 00:21:58.456 Removing: /var/run/dpdk/spdk_pid71464 00:21:58.456 Removing: /var/run/dpdk/spdk_pid71597 00:21:58.456 Removing: /var/run/dpdk/spdk_pid71625 00:21:58.456 Removing: /var/run/dpdk/spdk_pid71646 00:21:58.456 Removing: /var/run/dpdk/spdk_pid71675 00:21:58.456 Removing: /var/run/dpdk/spdk_pid71767 00:21:58.456 Removing: /var/run/dpdk/spdk_pid71903 00:21:58.456 Removing: /var/run/dpdk/spdk_pid72056 00:21:58.456 Removing: /var/run/dpdk/spdk_pid72126 00:21:58.456 Removing: /var/run/dpdk/spdk_pid72326 00:21:58.456 Removing: /var/run/dpdk/spdk_pid72401 00:21:58.456 Removing: /var/run/dpdk/spdk_pid72494 00:21:58.715 Removing: /var/run/dpdk/spdk_pid72861 00:21:58.715 Removing: /var/run/dpdk/spdk_pid73290 00:21:58.715 Removing: /var/run/dpdk/spdk_pid73291 00:21:58.715 Removing: /var/run/dpdk/spdk_pid73292 00:21:58.715 Removing: /var/run/dpdk/spdk_pid73555 00:21:58.715 Removing: /var/run/dpdk/spdk_pid73879 00:21:58.715 Removing: /var/run/dpdk/spdk_pid73885 00:21:58.715 Removing: /var/run/dpdk/spdk_pid74209 00:21:58.715 Removing: /var/run/dpdk/spdk_pid74223 00:21:58.715 Removing: /var/run/dpdk/spdk_pid74243 00:21:58.715 Removing: 
/var/run/dpdk/spdk_pid74268 00:21:58.715 Removing: /var/run/dpdk/spdk_pid74273 00:21:58.715 Removing: /var/run/dpdk/spdk_pid74637 00:21:58.715 Removing: /var/run/dpdk/spdk_pid74686 00:21:58.715 Removing: /var/run/dpdk/spdk_pid75019 00:21:58.715 Removing: /var/run/dpdk/spdk_pid75222 00:21:58.715 Removing: /var/run/dpdk/spdk_pid75646 00:21:58.715 Removing: /var/run/dpdk/spdk_pid76200 00:21:58.715 Removing: /var/run/dpdk/spdk_pid77089 00:21:58.715 Removing: /var/run/dpdk/spdk_pid77718 00:21:58.715 Removing: /var/run/dpdk/spdk_pid77720 00:21:58.715 Removing: /var/run/dpdk/spdk_pid79746 00:21:58.715 Removing: /var/run/dpdk/spdk_pid79799 00:21:58.715 Removing: /var/run/dpdk/spdk_pid79852 00:21:58.715 Removing: /var/run/dpdk/spdk_pid79913 00:21:58.715 Removing: /var/run/dpdk/spdk_pid80034 00:21:58.715 Removing: /var/run/dpdk/spdk_pid80094 00:21:58.715 Removing: /var/run/dpdk/spdk_pid80154 00:21:58.715 Removing: /var/run/dpdk/spdk_pid80208 00:21:58.715 Removing: /var/run/dpdk/spdk_pid80579 00:21:58.715 Removing: /var/run/dpdk/spdk_pid81788 00:21:58.715 Removing: /var/run/dpdk/spdk_pid81940 00:21:58.715 Removing: /var/run/dpdk/spdk_pid82170 00:21:58.715 Removing: /var/run/dpdk/spdk_pid82764 00:21:58.715 Removing: /var/run/dpdk/spdk_pid82924 00:21:58.715 Removing: /var/run/dpdk/spdk_pid83081 00:21:58.715 Removing: /var/run/dpdk/spdk_pid83178 00:21:58.715 Removing: /var/run/dpdk/spdk_pid83348 00:21:58.715 Removing: /var/run/dpdk/spdk_pid83457 00:21:58.715 Removing: /var/run/dpdk/spdk_pid84161 00:21:58.716 Removing: /var/run/dpdk/spdk_pid84192 00:21:58.716 Removing: /var/run/dpdk/spdk_pid84228 00:21:58.716 Removing: /var/run/dpdk/spdk_pid84484 00:21:58.716 Removing: /var/run/dpdk/spdk_pid84522 00:21:58.716 Removing: /var/run/dpdk/spdk_pid84557 00:21:58.716 Removing: /var/run/dpdk/spdk_pid85026 00:21:58.716 Removing: /var/run/dpdk/spdk_pid85037 00:21:58.716 Removing: /var/run/dpdk/spdk_pid85291 00:21:58.716 Removing: /var/run/dpdk/spdk_pid85418 00:21:58.716 Removing: /var/run/dpdk/spdk_pid85429 00:21:58.716 Clean 00:21:58.716 10:11:30 -- common/autotest_common.sh@1451 -- # return 0 00:21:58.716 10:11:30 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:21:58.716 10:11:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.716 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:21:58.716 10:11:30 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:21:58.716 10:11:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:58.716 10:11:30 -- common/autotest_common.sh@10 -- # set +x 00:21:58.974 10:11:30 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:58.974 10:11:30 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:58.974 10:11:30 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:58.974 10:11:30 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:21:58.974 10:11:30 -- spdk/autotest.sh@394 -- # hostname 00:21:58.974 10:11:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:59.232 geninfo: WARNING: invalid characters removed from testname! 
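The long "Removing:" block that precedes the coverage capture is the post-run sweep of per-instance DPDK runtime state (config, fbarray_* and hugepage_info files under /var/run/dpdk/spdk0..spdk4), the shared-memory trace files, and one /var/run/dpdk/spdk_pidNNNNN directory for every SPDK process the run spawned. A minimal sketch of such a sweep, assuming the default runtime paths (the real autotest_cleanup also unwinds VMs, hugepages and loop devices):

    # remove leftover DPDK/SPDK runtime directories from this run
    for d in /var/run/dpdk/spdk*; do
        [ -e "$d" ] || continue
        echo "Removing: $d"
        rm -rf "$d"
    done
    # drop the shared-memory trace files left by the nvmf target and spdk_tgt
    rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*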
00:22:31.372 10:12:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:32.305 10:12:04 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:34.837 10:12:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:38.124 10:12:09 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:40.682 10:12:12 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:43.966 10:12:15 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:47.251 10:12:18 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:47.251 10:12:18 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:47.251 10:12:18 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:47.251 10:12:18 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:47.251 10:12:18 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:47.251 10:12:18 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:47.251 + [[ -n 5200 ]] 00:22:47.251 + sudo kill 5200 00:22:47.259 [Pipeline] } 00:22:47.275 [Pipeline] // timeout 00:22:47.280 [Pipeline] } 00:22:47.294 [Pipeline] // stage 00:22:47.298 [Pipeline] } 00:22:47.311 [Pipeline] // catchError 00:22:47.319 [Pipeline] stage 00:22:47.322 [Pipeline] { (Stop VM) 00:22:47.332 [Pipeline] sh 00:22:47.616 + vagrant halt 00:22:51.819 ==> default: Halting domain... 
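The lcov invocations above implement a capture → merge → prune flow: the pre-test baseline tracefile and the post-test tracefile are added together, then everything that is not SPDK's own code is stripped from the combined report. A condensed sketch of the same pipeline, with the output directory abbreviated and only the branch/function rc switches from the log retained:

    OUT=/home/vagrant/spdk_repo/spdk/../output
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
    # merge the pre-test baseline with the counters collected during the run
    $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # prune third-party and helper code from the combined report
    $LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '/usr/*' --ignore-errors unused,unused -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
    $LCOV -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"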
00:22:57.102 [Pipeline] sh 00:22:57.382 + vagrant destroy -f 00:23:01.565 ==> default: Removing domain... 00:23:01.575 [Pipeline] sh 00:23:01.854 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/output 00:23:01.862 [Pipeline] } 00:23:01.876 [Pipeline] // stage 00:23:01.881 [Pipeline] } 00:23:01.893 [Pipeline] // dir 00:23:01.898 [Pipeline] } 00:23:01.911 [Pipeline] // wrap 00:23:01.917 [Pipeline] } 00:23:01.928 [Pipeline] // catchError 00:23:01.936 [Pipeline] stage 00:23:01.938 [Pipeline] { (Epilogue) 00:23:01.950 [Pipeline] sh 00:23:02.230 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:08.841 [Pipeline] catchError 00:23:08.843 [Pipeline] { 00:23:08.854 [Pipeline] sh 00:23:09.180 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:09.180 Artifacts sizes are good 00:23:09.188 [Pipeline] } 00:23:09.202 [Pipeline] // catchError 00:23:09.212 [Pipeline] archiveArtifacts 00:23:09.218 Archiving artifacts 00:23:09.344 [Pipeline] cleanWs 00:23:09.354 [WS-CLEANUP] Deleting project workspace... 00:23:09.354 [WS-CLEANUP] Deferred wipeout is used... 00:23:09.361 [WS-CLEANUP] done 00:23:09.363 [Pipeline] } 00:23:09.377 [Pipeline] // stage 00:23:09.382 [Pipeline] } 00:23:09.396 [Pipeline] // node 00:23:09.400 [Pipeline] End of Pipeline 00:23:09.436 Finished: SUCCESS
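Past this point the job only tears down the test VM and archives results. A sketch of the equivalent manual sequence, assuming the workspace layout shown in the log (Jenkins additionally wraps these steps in catchError, archiveArtifacts and cleanWs):

    vagrant halt              # "Halting domain..."
    vagrant destroy -f        # "Removing domain..."
    mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_4/output
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh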